00:00:00.001 Started by upstream project "autotest-per-patch" build number 126217 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.090 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.175 Using shallow fetch with depth 1 00:00:00.175 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.175 > git --version # timeout=10 00:00:00.224 > git --version # 'git version 2.39.2' 00:00:00.224 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.248 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.248 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.929 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.941 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.952 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.952 > git config core.sparsecheckout # timeout=10 00:00:05.965 > git read-tree -mu HEAD # timeout=10 00:00:05.981 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.034 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.034 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.128 [Pipeline] Start of Pipeline 00:00:06.141 [Pipeline] library 00:00:06.143 Loading library shm_lib@master 00:00:06.143 Library shm_lib@master is cached. Copying from home. 00:00:06.157 [Pipeline] node 00:00:21.159 Still waiting to schedule task 00:00:21.159 Waiting for next available executor on ‘vagrant-vm-host’ 00:00:40.260 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:40.262 [Pipeline] { 00:00:40.273 [Pipeline] catchError 00:00:40.275 [Pipeline] { 00:00:40.292 [Pipeline] wrap 00:00:40.302 [Pipeline] { 00:00:40.314 [Pipeline] stage 00:00:40.316 [Pipeline] { (Prologue) 00:00:40.339 [Pipeline] echo 00:00:40.341 Node: VM-host-SM17 00:00:40.347 [Pipeline] cleanWs 00:00:40.356 [WS-CLEANUP] Deleting project workspace... 00:00:40.357 [WS-CLEANUP] Deferred wipeout is used... 
00:00:40.364 [WS-CLEANUP] done 00:00:40.521 [Pipeline] setCustomBuildProperty 00:00:40.618 [Pipeline] httpRequest 00:00:40.641 [Pipeline] echo 00:00:40.643 Sorcerer 10.211.164.101 is alive 00:00:40.650 [Pipeline] httpRequest 00:00:40.654 HttpMethod: GET 00:00:40.654 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:40.655 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:40.656 Response Code: HTTP/1.1 200 OK 00:00:40.656 Success: Status code 200 is in the accepted range: 200,404 00:00:40.657 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:41.497 [Pipeline] sh 00:00:41.775 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:41.793 [Pipeline] httpRequest 00:00:41.813 [Pipeline] echo 00:00:41.815 Sorcerer 10.211.164.101 is alive 00:00:41.822 [Pipeline] httpRequest 00:00:41.826 HttpMethod: GET 00:00:41.826 URL: http://10.211.164.101/packages/spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz 00:00:41.827 Sending request to url: http://10.211.164.101/packages/spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz 00:00:41.835 Response Code: HTTP/1.1 200 OK 00:00:41.836 Success: Status code 200 is in the accepted range: 200,404 00:00:41.836 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz 00:00:53.855 [Pipeline] sh 00:00:54.193 + tar --no-same-owner -xf spdk_bdeef1ed399c7bd878158b1caeed69f1d167a305.tar.gz 00:00:57.492 [Pipeline] sh 00:00:57.777 + git -C spdk log --oneline -n5 00:00:57.777 bdeef1ed3 nvmf: add helper function to get a transport poll group 00:00:57.777 2728651ee accel: adjust task per ch define name 00:00:57.777 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:57.777 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:00:57.777 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:00:57.798 [Pipeline] writeFile 00:00:57.816 [Pipeline] sh 00:00:58.096 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:58.108 [Pipeline] sh 00:00:58.388 + cat autorun-spdk.conf 00:00:58.388 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.388 SPDK_TEST_NVMF=1 00:00:58.388 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.388 SPDK_TEST_URING=1 00:00:58.388 SPDK_TEST_USDT=1 00:00:58.388 SPDK_RUN_UBSAN=1 00:00:58.388 NET_TYPE=virt 00:00:58.388 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.395 RUN_NIGHTLY=0 00:00:58.397 [Pipeline] } 00:00:58.414 [Pipeline] // stage 00:00:58.431 [Pipeline] stage 00:00:58.433 [Pipeline] { (Run VM) 00:00:58.444 [Pipeline] sh 00:00:58.718 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:58.718 + echo 'Start stage prepare_nvme.sh' 00:00:58.718 Start stage prepare_nvme.sh 00:00:58.718 + [[ -n 5 ]] 00:00:58.718 + disk_prefix=ex5 00:00:58.718 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:58.718 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:58.718 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:58.718 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.718 ++ SPDK_TEST_NVMF=1 00:00:58.718 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.718 ++ SPDK_TEST_URING=1 00:00:58.718 ++ SPDK_TEST_USDT=1 00:00:58.718 ++ SPDK_RUN_UBSAN=1 00:00:58.718 ++ NET_TYPE=virt 00:00:58.718 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.718 ++ RUN_NIGHTLY=0 00:00:58.718 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:58.718 + nvme_files=() 00:00:58.718 + declare -A nvme_files 00:00:58.718 + backend_dir=/var/lib/libvirt/images/backends 00:00:58.718 + nvme_files['nvme.img']=5G 00:00:58.718 + nvme_files['nvme-cmb.img']=5G 00:00:58.718 + nvme_files['nvme-multi0.img']=4G 00:00:58.718 + nvme_files['nvme-multi1.img']=4G 00:00:58.718 + nvme_files['nvme-multi2.img']=4G 00:00:58.718 + nvme_files['nvme-openstack.img']=8G 00:00:58.718 + nvme_files['nvme-zns.img']=5G 00:00:58.718 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:58.718 + (( SPDK_TEST_FTL == 1 )) 00:00:58.718 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:58.718 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:58.718 + for nvme in "${!nvme_files[@]}" 00:00:58.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:58.718 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.718 + for nvme in "${!nvme_files[@]}" 00:00:58.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:58.718 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.718 + for nvme in "${!nvme_files[@]}" 00:00:58.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:58.718 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:58.718 + for nvme in "${!nvme_files[@]}" 00:00:58.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:58.718 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.718 + for nvme in "${!nvme_files[@]}" 00:00:58.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:58.718 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.718 + for nvme in "${!nvme_files[@]}" 00:00:58.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:58.718 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.718 + for nvme in "${!nvme_files[@]}" 00:00:58.718 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:59.285 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:59.285 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:59.285 + echo 'End stage prepare_nvme.sh' 00:00:59.285 End stage prepare_nvme.sh 00:00:59.295 [Pipeline] sh 00:00:59.574 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:59.574 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:00:59.574 00:00:59.574 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:59.574 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:59.574 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:59.574 HELP=0 00:00:59.574 DRY_RUN=0 00:00:59.574 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:59.574 NVME_DISKS_TYPE=nvme,nvme, 00:00:59.574 NVME_AUTO_CREATE=0 00:00:59.574 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:59.574 NVME_CMB=,, 00:00:59.574 NVME_PMR=,, 00:00:59.574 NVME_ZNS=,, 00:00:59.574 NVME_MS=,, 00:00:59.574 NVME_FDP=,, 00:00:59.574 SPDK_VAGRANT_DISTRO=fedora38 00:00:59.574 SPDK_VAGRANT_VMCPU=10 00:00:59.574 SPDK_VAGRANT_VMRAM=12288 00:00:59.574 SPDK_VAGRANT_PROVIDER=libvirt 00:00:59.574 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:59.574 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:59.574 SPDK_OPENSTACK_NETWORK=0 00:00:59.574 VAGRANT_PACKAGE_BOX=0 00:00:59.574 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:59.574 FORCE_DISTRO=true 00:00:59.574 VAGRANT_BOX_VERSION= 00:00:59.574 EXTRA_VAGRANTFILES= 00:00:59.574 NIC_MODEL=e1000 00:00:59.574 00:00:59.574 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:59.574 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:02.855 Bringing machine 'default' up with 'libvirt' provider... 00:01:03.814 ==> default: Creating image (snapshot of base box volume). 00:01:03.814 ==> default: Creating domain with the following settings... 00:01:03.814 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721060148_013f82bab36cf1e8801f 00:01:03.814 ==> default: -- Domain type: kvm 00:01:03.814 ==> default: -- Cpus: 10 00:01:03.814 ==> default: -- Feature: acpi 00:01:03.814 ==> default: -- Feature: apic 00:01:03.814 ==> default: -- Feature: pae 00:01:03.814 ==> default: -- Memory: 12288M 00:01:03.814 ==> default: -- Memory Backing: hugepages: 00:01:03.814 ==> default: -- Management MAC: 00:01:03.814 ==> default: -- Loader: 00:01:03.814 ==> default: -- Nvram: 00:01:03.814 ==> default: -- Base box: spdk/fedora38 00:01:03.814 ==> default: -- Storage pool: default 00:01:03.814 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721060148_013f82bab36cf1e8801f.img (20G) 00:01:03.814 ==> default: -- Volume Cache: default 00:01:03.814 ==> default: -- Kernel: 00:01:03.814 ==> default: -- Initrd: 00:01:03.814 ==> default: -- Graphics Type: vnc 00:01:03.814 ==> default: -- Graphics Port: -1 00:01:03.814 ==> default: -- Graphics IP: 127.0.0.1 00:01:03.814 ==> default: -- Graphics Password: Not defined 00:01:03.814 ==> default: -- Video Type: cirrus 00:01:03.814 ==> default: -- Video VRAM: 9216 00:01:03.814 ==> default: -- Sound Type: 00:01:03.814 ==> default: -- Keymap: en-us 00:01:03.814 ==> default: -- TPM Path: 00:01:03.814 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:03.814 ==> default: -- Command line args: 00:01:03.814 ==> default: -> value=-device, 00:01:03.814 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:03.814 ==> default: -> value=-drive, 00:01:03.814 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:03.814 ==> default: -> value=-device, 00:01:03.814 ==> 
default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.814 ==> default: -> value=-device, 00:01:03.814 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:03.814 ==> default: -> value=-drive, 00:01:03.814 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:03.814 ==> default: -> value=-device, 00:01:03.814 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.814 ==> default: -> value=-drive, 00:01:03.814 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:03.814 ==> default: -> value=-device, 00:01:03.814 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.814 ==> default: -> value=-drive, 00:01:03.814 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:03.814 ==> default: -> value=-device, 00:01:03.814 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.100 ==> default: Creating shared folders metadata... 00:01:04.100 ==> default: Starting domain. 00:01:06.001 ==> default: Waiting for domain to get an IP address... 00:01:24.083 ==> default: Waiting for SSH to become available... 00:01:24.083 ==> default: Configuring and enabling network interfaces... 00:01:26.622 default: SSH address: 192.168.121.155:22 00:01:26.622 default: SSH username: vagrant 00:01:26.622 default: SSH auth method: private key 00:01:29.176 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:37.285 ==> default: Mounting SSHFS shared folder... 00:01:37.866 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:37.866 ==> default: Checking Mount.. 00:01:39.244 ==> default: Folder Successfully Mounted! 00:01:39.244 ==> default: Running provisioner: file... 00:01:40.180 default: ~/.gitconfig => .gitconfig 00:01:40.438 00:01:40.438 SUCCESS! 00:01:40.438 00:01:40.438 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:40.438 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:40.438 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
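Editor's note on the domain definition above: libvirt hands each backing image to QEMU as a separate -drive and attaches it to an emulated NVMe controller through an nvme-ns device, so the first controller (serial 12340) exposes one namespace and the second (serial 12341) exposes three. A minimal standalone sketch of that same device layout, assuming qemu-system-x86_64 is on PATH and the ex5-* images already exist at the paths shown in the log (everything outside the -device/-drive arguments is illustrative, not taken from this run):

    # Hedged sketch: re-creating the NVMe topology from the log by hand.
    # Device and drive arguments are copied from the log; memory/CPU values
    # mirror the VMRAM/VMCPU settings above but the invocation itself is an assumption.
    qemu-system-x86_64 -m 12288 -smp 10 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096

Inside the guest these show up as nvme0 (one namespace) and nvme1 (three namespaces), which matches the setup.sh status output later in the log.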
00:01:40.438 00:01:40.447 [Pipeline] } 00:01:40.462 [Pipeline] // stage 00:01:40.470 [Pipeline] dir 00:01:40.470 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:40.471 [Pipeline] { 00:01:40.480 [Pipeline] catchError 00:01:40.481 [Pipeline] { 00:01:40.489 [Pipeline] sh 00:01:40.772 + vagrant ssh-config --host vagrant 00:01:40.772 + sed -ne /^Host/,$p+ 00:01:40.772 tee ssh_conf 00:01:44.991 Host vagrant 00:01:44.991 HostName 192.168.121.155 00:01:44.991 User vagrant 00:01:44.991 Port 22 00:01:44.991 UserKnownHostsFile /dev/null 00:01:44.991 StrictHostKeyChecking no 00:01:44.991 PasswordAuthentication no 00:01:44.991 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:44.991 IdentitiesOnly yes 00:01:44.991 LogLevel FATAL 00:01:44.991 ForwardAgent yes 00:01:44.991 ForwardX11 yes 00:01:44.991 00:01:45.004 [Pipeline] withEnv 00:01:45.006 [Pipeline] { 00:01:45.022 [Pipeline] sh 00:01:45.299 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:45.299 source /etc/os-release 00:01:45.299 [[ -e /image.version ]] && img=$(< /image.version) 00:01:45.299 # Minimal, systemd-like check. 00:01:45.299 if [[ -e /.dockerenv ]]; then 00:01:45.299 # Clear garbage from the node's name: 00:01:45.299 # agt-er_autotest_547-896 -> autotest_547-896 00:01:45.299 # $HOSTNAME is the actual container id 00:01:45.299 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:45.299 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:45.299 # We can assume this is a mount from a host where container is running, 00:01:45.299 # so fetch its hostname to easily identify the target swarm worker. 00:01:45.299 container="$(< /etc/hostname) ($agent)" 00:01:45.299 else 00:01:45.299 # Fallback 00:01:45.299 container=$agent 00:01:45.299 fi 00:01:45.299 fi 00:01:45.299 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:45.299 00:01:45.569 [Pipeline] } 00:01:45.594 [Pipeline] // withEnv 00:01:45.607 [Pipeline] setCustomBuildProperty 00:01:45.620 [Pipeline] stage 00:01:45.622 [Pipeline] { (Tests) 00:01:45.638 [Pipeline] sh 00:01:45.913 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:46.183 [Pipeline] sh 00:01:46.506 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:46.525 [Pipeline] timeout 00:01:46.526 Timeout set to expire in 30 min 00:01:46.527 [Pipeline] { 00:01:46.541 [Pipeline] sh 00:01:46.815 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:47.381 HEAD is now at bdeef1ed3 nvmf: add helper function to get a transport poll group 00:01:47.394 [Pipeline] sh 00:01:47.671 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:47.939 [Pipeline] sh 00:01:48.213 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:48.486 [Pipeline] sh 00:01:48.762 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:48.762 ++ readlink -f spdk_repo 00:01:48.762 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:48.762 + [[ -n /home/vagrant/spdk_repo ]] 00:01:48.762 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:48.762 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 
00:01:48.762 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:48.763 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:48.763 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:48.763 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:48.763 + cd /home/vagrant/spdk_repo 00:01:48.763 + source /etc/os-release 00:01:48.763 ++ NAME='Fedora Linux' 00:01:48.763 ++ VERSION='38 (Cloud Edition)' 00:01:48.763 ++ ID=fedora 00:01:48.763 ++ VERSION_ID=38 00:01:48.763 ++ VERSION_CODENAME= 00:01:48.763 ++ PLATFORM_ID=platform:f38 00:01:48.763 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:48.763 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:48.763 ++ LOGO=fedora-logo-icon 00:01:48.763 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:48.763 ++ HOME_URL=https://fedoraproject.org/ 00:01:48.763 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:48.763 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:48.763 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:48.763 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:48.763 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:48.763 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:48.763 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:48.763 ++ SUPPORT_END=2024-05-14 00:01:48.763 ++ VARIANT='Cloud Edition' 00:01:48.763 ++ VARIANT_ID=cloud 00:01:48.763 + uname -a 00:01:48.763 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:48.763 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:49.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:49.328 Hugepages 00:01:49.328 node hugesize free / total 00:01:49.328 node0 1048576kB 0 / 0 00:01:49.328 node0 2048kB 0 / 0 00:01:49.328 00:01:49.328 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:49.328 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:49.328 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:49.328 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:49.328 + rm -f /tmp/spdk-ld-path 00:01:49.328 + source autorun-spdk.conf 00:01:49.328 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.328 ++ SPDK_TEST_NVMF=1 00:01:49.328 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.328 ++ SPDK_TEST_URING=1 00:01:49.328 ++ SPDK_TEST_USDT=1 00:01:49.328 ++ SPDK_RUN_UBSAN=1 00:01:49.328 ++ NET_TYPE=virt 00:01:49.328 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:49.328 ++ RUN_NIGHTLY=0 00:01:49.328 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:49.328 + [[ -n '' ]] 00:01:49.328 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:49.328 + for M in /var/spdk/build-*-manifest.txt 00:01:49.328 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:49.329 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:49.329 + for M in /var/spdk/build-*-manifest.txt 00:01:49.329 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:49.329 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:49.329 ++ uname 00:01:49.329 + [[ Linux == \L\i\n\u\x ]] 00:01:49.329 + sudo dmesg -T 00:01:49.587 + sudo dmesg --clear 00:01:49.587 + dmesg_pid=5107 00:01:49.587 + sudo dmesg -Tw 00:01:49.587 + [[ Fedora Linux == FreeBSD ]] 00:01:49.587 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.587 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.587 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:49.587 + [[ -x 
/usr/src/fio-static/fio ]] 00:01:49.587 + export FIO_BIN=/usr/src/fio-static/fio 00:01:49.587 + FIO_BIN=/usr/src/fio-static/fio 00:01:49.587 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:49.587 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:49.587 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:49.587 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.587 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.587 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:49.587 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.587 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.587 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:49.587 Test configuration: 00:01:49.587 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.587 SPDK_TEST_NVMF=1 00:01:49.587 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.587 SPDK_TEST_URING=1 00:01:49.587 SPDK_TEST_USDT=1 00:01:49.587 SPDK_RUN_UBSAN=1 00:01:49.587 NET_TYPE=virt 00:01:49.587 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:49.587 RUN_NIGHTLY=0 16:16:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:49.587 16:16:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:49.587 16:16:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.587 16:16:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.587 16:16:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.587 16:16:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.587 16:16:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.587 16:16:34 -- paths/export.sh@5 -- $ export PATH 00:01:49.587 16:16:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.587 16:16:35 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:49.587 16:16:35 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:49.587 16:16:35 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721060195.XXXXXX 00:01:49.587 16:16:35 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721060195.GeJ2Rv 00:01:49.587 16:16:35 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:49.587 16:16:35 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:49.587 16:16:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:49.587 16:16:35 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:49.587 16:16:35 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:49.587 16:16:35 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:49.587 16:16:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:49.587 16:16:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.587 16:16:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:49.587 16:16:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:49.587 16:16:35 -- pm/common@17 -- $ local monitor 00:01:49.587 16:16:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.587 16:16:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.587 16:16:35 -- pm/common@21 -- $ date +%s 00:01:49.587 16:16:35 -- pm/common@25 -- $ sleep 1 00:01:49.587 16:16:35 -- pm/common@21 -- $ date +%s 00:01:49.587 16:16:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721060195 00:01:49.587 16:16:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721060195 00:01:49.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721060195_collect-vmstat.pm.log 00:01:49.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721060195_collect-cpu-load.pm.log 00:01:50.518 16:16:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:50.518 16:16:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:50.518 16:16:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:50.518 16:16:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:50.518 16:16:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:50.518 Mon Jul 15 04:16:36 PM UTC 2024 00:01:50.518 16:16:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:50.518 v24.09-pre-207-gbdeef1ed3 00:01:50.518 16:16:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:50.518 16:16:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:50.518 16:16:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:50.519 16:16:36 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:50.519 16:16:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:50.519 16:16:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.775 ************************************ 00:01:50.775 START TEST ubsan 00:01:50.775 ************************************ 00:01:50.775 using ubsan 00:01:50.775 16:16:36 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:50.775 00:01:50.775 real 0m0.000s 
00:01:50.775 user 0m0.000s 00:01:50.775 sys 0m0.000s 00:01:50.775 16:16:36 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:50.775 16:16:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:50.775 ************************************ 00:01:50.775 END TEST ubsan 00:01:50.775 ************************************ 00:01:50.775 16:16:36 -- common/autotest_common.sh@1142 -- $ return 0 00:01:50.775 16:16:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:50.775 16:16:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:50.775 16:16:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:50.775 16:16:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:50.775 16:16:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:50.775 16:16:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:50.775 16:16:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:50.775 16:16:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:50.775 16:16:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:50.775 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:50.775 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:51.340 Using 'verbs' RDMA provider 00:02:04.478 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:19.350 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:19.350 Creating mk/config.mk...done. 00:02:19.350 Creating mk/cc.flags.mk...done. 00:02:19.350 Type 'make' to build. 00:02:19.350 16:17:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:19.350 16:17:03 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:19.350 16:17:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:19.350 16:17:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.350 ************************************ 00:02:19.350 START TEST make 00:02:19.350 ************************************ 00:02:19.350 16:17:03 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:19.350 make[1]: Nothing to be done for 'all'. 
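Editor's note on the build step above: autobuild has just run ./configure with the flag list echoed in config_params and is now driving `make -j10`, which in turn configures the bundled DPDK with Meson (output follows). A hedged sketch of replaying this step outside the CI harness, assuming a checkout at ~/spdk_repo/spdk with submodules available and fio sources at /usr/src/fio as referenced by --with-fio:

    # Hedged sketch: reproducing the configure/make step from the log by hand.
    # The configure flags are copied verbatim from the log; the clone location
    # and the submodule step are assumptions about a fresh local checkout.
    cd ~/spdk_repo/spdk
    git submodule update --init          # fetch dpdk/ and other bundled deps
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
        --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage \
        --with-ublk --with-uring --with-shared
    make -j10                            # same job count as the CI run

The --enable-ubsan and --with-uring switches appear to mirror SPDK_RUN_UBSAN=1 and SPDK_TEST_URING=1 from the autorun-spdk.conf dumped earlier in this log.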
00:02:29.320 The Meson build system 00:02:29.320 Version: 1.3.1 00:02:29.320 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:29.320 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:29.320 Build type: native build 00:02:29.320 Program cat found: YES (/usr/bin/cat) 00:02:29.320 Project name: DPDK 00:02:29.320 Project version: 24.03.0 00:02:29.320 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:29.320 C linker for the host machine: cc ld.bfd 2.39-16 00:02:29.320 Host machine cpu family: x86_64 00:02:29.320 Host machine cpu: x86_64 00:02:29.320 Message: ## Building in Developer Mode ## 00:02:29.320 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:29.320 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:29.320 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:29.320 Program python3 found: YES (/usr/bin/python3) 00:02:29.320 Program cat found: YES (/usr/bin/cat) 00:02:29.320 Compiler for C supports arguments -march=native: YES 00:02:29.320 Checking for size of "void *" : 8 00:02:29.320 Checking for size of "void *" : 8 (cached) 00:02:29.320 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:29.320 Library m found: YES 00:02:29.320 Library numa found: YES 00:02:29.320 Has header "numaif.h" : YES 00:02:29.320 Library fdt found: NO 00:02:29.320 Library execinfo found: NO 00:02:29.320 Has header "execinfo.h" : YES 00:02:29.320 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:29.320 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:29.320 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:29.320 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:29.320 Run-time dependency openssl found: YES 3.0.9 00:02:29.320 Run-time dependency libpcap found: YES 1.10.4 00:02:29.320 Has header "pcap.h" with dependency libpcap: YES 00:02:29.320 Compiler for C supports arguments -Wcast-qual: YES 00:02:29.320 Compiler for C supports arguments -Wdeprecated: YES 00:02:29.320 Compiler for C supports arguments -Wformat: YES 00:02:29.320 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:29.320 Compiler for C supports arguments -Wformat-security: NO 00:02:29.320 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.320 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:29.320 Compiler for C supports arguments -Wnested-externs: YES 00:02:29.320 Compiler for C supports arguments -Wold-style-definition: YES 00:02:29.320 Compiler for C supports arguments -Wpointer-arith: YES 00:02:29.320 Compiler for C supports arguments -Wsign-compare: YES 00:02:29.320 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:29.320 Compiler for C supports arguments -Wundef: YES 00:02:29.320 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.320 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:29.320 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:29.320 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:29.320 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:29.320 Program objdump found: YES (/usr/bin/objdump) 00:02:29.320 Compiler for C supports arguments -mavx512f: YES 00:02:29.320 Checking if "AVX512 checking" compiles: YES 00:02:29.320 Fetching value of define "__SSE4_2__" : 1 00:02:29.320 Fetching value of define 
"__AES__" : 1 00:02:29.320 Fetching value of define "__AVX__" : 1 00:02:29.320 Fetching value of define "__AVX2__" : 1 00:02:29.320 Fetching value of define "__AVX512BW__" : (undefined) 00:02:29.320 Fetching value of define "__AVX512CD__" : (undefined) 00:02:29.320 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:29.320 Fetching value of define "__AVX512F__" : (undefined) 00:02:29.320 Fetching value of define "__AVX512VL__" : (undefined) 00:02:29.320 Fetching value of define "__PCLMUL__" : 1 00:02:29.320 Fetching value of define "__RDRND__" : 1 00:02:29.320 Fetching value of define "__RDSEED__" : 1 00:02:29.320 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:29.320 Fetching value of define "__znver1__" : (undefined) 00:02:29.320 Fetching value of define "__znver2__" : (undefined) 00:02:29.320 Fetching value of define "__znver3__" : (undefined) 00:02:29.320 Fetching value of define "__znver4__" : (undefined) 00:02:29.320 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:29.320 Message: lib/log: Defining dependency "log" 00:02:29.320 Message: lib/kvargs: Defining dependency "kvargs" 00:02:29.320 Message: lib/telemetry: Defining dependency "telemetry" 00:02:29.320 Checking for function "getentropy" : NO 00:02:29.320 Message: lib/eal: Defining dependency "eal" 00:02:29.320 Message: lib/ring: Defining dependency "ring" 00:02:29.320 Message: lib/rcu: Defining dependency "rcu" 00:02:29.320 Message: lib/mempool: Defining dependency "mempool" 00:02:29.320 Message: lib/mbuf: Defining dependency "mbuf" 00:02:29.320 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:29.320 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:29.320 Compiler for C supports arguments -mpclmul: YES 00:02:29.320 Compiler for C supports arguments -maes: YES 00:02:29.320 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.320 Compiler for C supports arguments -mavx512bw: YES 00:02:29.320 Compiler for C supports arguments -mavx512dq: YES 00:02:29.320 Compiler for C supports arguments -mavx512vl: YES 00:02:29.320 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:29.320 Compiler for C supports arguments -mavx2: YES 00:02:29.320 Compiler for C supports arguments -mavx: YES 00:02:29.320 Message: lib/net: Defining dependency "net" 00:02:29.320 Message: lib/meter: Defining dependency "meter" 00:02:29.320 Message: lib/ethdev: Defining dependency "ethdev" 00:02:29.320 Message: lib/pci: Defining dependency "pci" 00:02:29.320 Message: lib/cmdline: Defining dependency "cmdline" 00:02:29.320 Message: lib/hash: Defining dependency "hash" 00:02:29.320 Message: lib/timer: Defining dependency "timer" 00:02:29.320 Message: lib/compressdev: Defining dependency "compressdev" 00:02:29.320 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:29.320 Message: lib/dmadev: Defining dependency "dmadev" 00:02:29.320 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:29.320 Message: lib/power: Defining dependency "power" 00:02:29.320 Message: lib/reorder: Defining dependency "reorder" 00:02:29.320 Message: lib/security: Defining dependency "security" 00:02:29.320 Has header "linux/userfaultfd.h" : YES 00:02:29.320 Has header "linux/vduse.h" : YES 00:02:29.320 Message: lib/vhost: Defining dependency "vhost" 00:02:29.320 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:29.320 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.320 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:29.320 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:29.320 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:29.320 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:29.320 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:29.320 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:29.320 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:29.320 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:29.320 Program doxygen found: YES (/usr/bin/doxygen) 00:02:29.320 Configuring doxy-api-html.conf using configuration 00:02:29.320 Configuring doxy-api-man.conf using configuration 00:02:29.320 Program mandb found: YES (/usr/bin/mandb) 00:02:29.320 Program sphinx-build found: NO 00:02:29.320 Configuring rte_build_config.h using configuration 00:02:29.320 Message: 00:02:29.320 ================= 00:02:29.320 Applications Enabled 00:02:29.320 ================= 00:02:29.320 00:02:29.320 apps: 00:02:29.320 00:02:29.320 00:02:29.320 Message: 00:02:29.320 ================= 00:02:29.320 Libraries Enabled 00:02:29.320 ================= 00:02:29.320 00:02:29.321 libs: 00:02:29.321 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:29.321 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:29.321 cryptodev, dmadev, power, reorder, security, vhost, 00:02:29.321 00:02:29.321 Message: 00:02:29.321 =============== 00:02:29.321 Drivers Enabled 00:02:29.321 =============== 00:02:29.321 00:02:29.321 common: 00:02:29.321 00:02:29.321 bus: 00:02:29.321 pci, vdev, 00:02:29.321 mempool: 00:02:29.321 ring, 00:02:29.321 dma: 00:02:29.321 00:02:29.321 net: 00:02:29.321 00:02:29.321 crypto: 00:02:29.321 00:02:29.321 compress: 00:02:29.321 00:02:29.321 vdpa: 00:02:29.321 00:02:29.321 00:02:29.321 Message: 00:02:29.321 ================= 00:02:29.321 Content Skipped 00:02:29.321 ================= 00:02:29.321 00:02:29.321 apps: 00:02:29.321 dumpcap: explicitly disabled via build config 00:02:29.321 graph: explicitly disabled via build config 00:02:29.321 pdump: explicitly disabled via build config 00:02:29.321 proc-info: explicitly disabled via build config 00:02:29.321 test-acl: explicitly disabled via build config 00:02:29.321 test-bbdev: explicitly disabled via build config 00:02:29.321 test-cmdline: explicitly disabled via build config 00:02:29.321 test-compress-perf: explicitly disabled via build config 00:02:29.321 test-crypto-perf: explicitly disabled via build config 00:02:29.321 test-dma-perf: explicitly disabled via build config 00:02:29.321 test-eventdev: explicitly disabled via build config 00:02:29.321 test-fib: explicitly disabled via build config 00:02:29.321 test-flow-perf: explicitly disabled via build config 00:02:29.321 test-gpudev: explicitly disabled via build config 00:02:29.321 test-mldev: explicitly disabled via build config 00:02:29.321 test-pipeline: explicitly disabled via build config 00:02:29.321 test-pmd: explicitly disabled via build config 00:02:29.321 test-regex: explicitly disabled via build config 00:02:29.321 test-sad: explicitly disabled via build config 00:02:29.321 test-security-perf: explicitly disabled via build config 00:02:29.321 00:02:29.321 libs: 00:02:29.321 argparse: explicitly disabled via build config 00:02:29.321 metrics: explicitly disabled via build config 00:02:29.321 acl: explicitly disabled via build config 00:02:29.321 bbdev: explicitly disabled via build config 00:02:29.321 
bitratestats: explicitly disabled via build config 00:02:29.321 bpf: explicitly disabled via build config 00:02:29.321 cfgfile: explicitly disabled via build config 00:02:29.321 distributor: explicitly disabled via build config 00:02:29.321 efd: explicitly disabled via build config 00:02:29.321 eventdev: explicitly disabled via build config 00:02:29.321 dispatcher: explicitly disabled via build config 00:02:29.321 gpudev: explicitly disabled via build config 00:02:29.321 gro: explicitly disabled via build config 00:02:29.321 gso: explicitly disabled via build config 00:02:29.321 ip_frag: explicitly disabled via build config 00:02:29.321 jobstats: explicitly disabled via build config 00:02:29.321 latencystats: explicitly disabled via build config 00:02:29.321 lpm: explicitly disabled via build config 00:02:29.321 member: explicitly disabled via build config 00:02:29.321 pcapng: explicitly disabled via build config 00:02:29.321 rawdev: explicitly disabled via build config 00:02:29.321 regexdev: explicitly disabled via build config 00:02:29.321 mldev: explicitly disabled via build config 00:02:29.321 rib: explicitly disabled via build config 00:02:29.321 sched: explicitly disabled via build config 00:02:29.321 stack: explicitly disabled via build config 00:02:29.321 ipsec: explicitly disabled via build config 00:02:29.321 pdcp: explicitly disabled via build config 00:02:29.321 fib: explicitly disabled via build config 00:02:29.321 port: explicitly disabled via build config 00:02:29.321 pdump: explicitly disabled via build config 00:02:29.321 table: explicitly disabled via build config 00:02:29.321 pipeline: explicitly disabled via build config 00:02:29.321 graph: explicitly disabled via build config 00:02:29.321 node: explicitly disabled via build config 00:02:29.321 00:02:29.321 drivers: 00:02:29.321 common/cpt: not in enabled drivers build config 00:02:29.321 common/dpaax: not in enabled drivers build config 00:02:29.321 common/iavf: not in enabled drivers build config 00:02:29.321 common/idpf: not in enabled drivers build config 00:02:29.321 common/ionic: not in enabled drivers build config 00:02:29.321 common/mvep: not in enabled drivers build config 00:02:29.321 common/octeontx: not in enabled drivers build config 00:02:29.321 bus/auxiliary: not in enabled drivers build config 00:02:29.321 bus/cdx: not in enabled drivers build config 00:02:29.321 bus/dpaa: not in enabled drivers build config 00:02:29.321 bus/fslmc: not in enabled drivers build config 00:02:29.321 bus/ifpga: not in enabled drivers build config 00:02:29.321 bus/platform: not in enabled drivers build config 00:02:29.321 bus/uacce: not in enabled drivers build config 00:02:29.321 bus/vmbus: not in enabled drivers build config 00:02:29.321 common/cnxk: not in enabled drivers build config 00:02:29.321 common/mlx5: not in enabled drivers build config 00:02:29.321 common/nfp: not in enabled drivers build config 00:02:29.321 common/nitrox: not in enabled drivers build config 00:02:29.321 common/qat: not in enabled drivers build config 00:02:29.321 common/sfc_efx: not in enabled drivers build config 00:02:29.321 mempool/bucket: not in enabled drivers build config 00:02:29.321 mempool/cnxk: not in enabled drivers build config 00:02:29.321 mempool/dpaa: not in enabled drivers build config 00:02:29.321 mempool/dpaa2: not in enabled drivers build config 00:02:29.321 mempool/octeontx: not in enabled drivers build config 00:02:29.321 mempool/stack: not in enabled drivers build config 00:02:29.321 dma/cnxk: not in enabled drivers build 
config 00:02:29.321 dma/dpaa: not in enabled drivers build config 00:02:29.321 dma/dpaa2: not in enabled drivers build config 00:02:29.321 dma/hisilicon: not in enabled drivers build config 00:02:29.321 dma/idxd: not in enabled drivers build config 00:02:29.321 dma/ioat: not in enabled drivers build config 00:02:29.321 dma/skeleton: not in enabled drivers build config 00:02:29.321 net/af_packet: not in enabled drivers build config 00:02:29.321 net/af_xdp: not in enabled drivers build config 00:02:29.321 net/ark: not in enabled drivers build config 00:02:29.321 net/atlantic: not in enabled drivers build config 00:02:29.321 net/avp: not in enabled drivers build config 00:02:29.321 net/axgbe: not in enabled drivers build config 00:02:29.321 net/bnx2x: not in enabled drivers build config 00:02:29.321 net/bnxt: not in enabled drivers build config 00:02:29.321 net/bonding: not in enabled drivers build config 00:02:29.321 net/cnxk: not in enabled drivers build config 00:02:29.321 net/cpfl: not in enabled drivers build config 00:02:29.321 net/cxgbe: not in enabled drivers build config 00:02:29.321 net/dpaa: not in enabled drivers build config 00:02:29.321 net/dpaa2: not in enabled drivers build config 00:02:29.321 net/e1000: not in enabled drivers build config 00:02:29.321 net/ena: not in enabled drivers build config 00:02:29.321 net/enetc: not in enabled drivers build config 00:02:29.321 net/enetfec: not in enabled drivers build config 00:02:29.321 net/enic: not in enabled drivers build config 00:02:29.321 net/failsafe: not in enabled drivers build config 00:02:29.321 net/fm10k: not in enabled drivers build config 00:02:29.321 net/gve: not in enabled drivers build config 00:02:29.321 net/hinic: not in enabled drivers build config 00:02:29.321 net/hns3: not in enabled drivers build config 00:02:29.321 net/i40e: not in enabled drivers build config 00:02:29.321 net/iavf: not in enabled drivers build config 00:02:29.321 net/ice: not in enabled drivers build config 00:02:29.321 net/idpf: not in enabled drivers build config 00:02:29.321 net/igc: not in enabled drivers build config 00:02:29.321 net/ionic: not in enabled drivers build config 00:02:29.321 net/ipn3ke: not in enabled drivers build config 00:02:29.321 net/ixgbe: not in enabled drivers build config 00:02:29.321 net/mana: not in enabled drivers build config 00:02:29.321 net/memif: not in enabled drivers build config 00:02:29.321 net/mlx4: not in enabled drivers build config 00:02:29.321 net/mlx5: not in enabled drivers build config 00:02:29.321 net/mvneta: not in enabled drivers build config 00:02:29.321 net/mvpp2: not in enabled drivers build config 00:02:29.321 net/netvsc: not in enabled drivers build config 00:02:29.321 net/nfb: not in enabled drivers build config 00:02:29.321 net/nfp: not in enabled drivers build config 00:02:29.321 net/ngbe: not in enabled drivers build config 00:02:29.321 net/null: not in enabled drivers build config 00:02:29.321 net/octeontx: not in enabled drivers build config 00:02:29.321 net/octeon_ep: not in enabled drivers build config 00:02:29.321 net/pcap: not in enabled drivers build config 00:02:29.321 net/pfe: not in enabled drivers build config 00:02:29.321 net/qede: not in enabled drivers build config 00:02:29.321 net/ring: not in enabled drivers build config 00:02:29.321 net/sfc: not in enabled drivers build config 00:02:29.321 net/softnic: not in enabled drivers build config 00:02:29.321 net/tap: not in enabled drivers build config 00:02:29.321 net/thunderx: not in enabled drivers build config 00:02:29.321 
net/txgbe: not in enabled drivers build config 00:02:29.321 net/vdev_netvsc: not in enabled drivers build config 00:02:29.321 net/vhost: not in enabled drivers build config 00:02:29.321 net/virtio: not in enabled drivers build config 00:02:29.321 net/vmxnet3: not in enabled drivers build config 00:02:29.321 raw/*: missing internal dependency, "rawdev" 00:02:29.321 crypto/armv8: not in enabled drivers build config 00:02:29.321 crypto/bcmfs: not in enabled drivers build config 00:02:29.321 crypto/caam_jr: not in enabled drivers build config 00:02:29.321 crypto/ccp: not in enabled drivers build config 00:02:29.321 crypto/cnxk: not in enabled drivers build config 00:02:29.321 crypto/dpaa_sec: not in enabled drivers build config 00:02:29.321 crypto/dpaa2_sec: not in enabled drivers build config 00:02:29.321 crypto/ipsec_mb: not in enabled drivers build config 00:02:29.321 crypto/mlx5: not in enabled drivers build config 00:02:29.321 crypto/mvsam: not in enabled drivers build config 00:02:29.321 crypto/nitrox: not in enabled drivers build config 00:02:29.321 crypto/null: not in enabled drivers build config 00:02:29.321 crypto/octeontx: not in enabled drivers build config 00:02:29.321 crypto/openssl: not in enabled drivers build config 00:02:29.321 crypto/scheduler: not in enabled drivers build config 00:02:29.321 crypto/uadk: not in enabled drivers build config 00:02:29.321 crypto/virtio: not in enabled drivers build config 00:02:29.321 compress/isal: not in enabled drivers build config 00:02:29.321 compress/mlx5: not in enabled drivers build config 00:02:29.321 compress/nitrox: not in enabled drivers build config 00:02:29.321 compress/octeontx: not in enabled drivers build config 00:02:29.321 compress/zlib: not in enabled drivers build config 00:02:29.321 regex/*: missing internal dependency, "regexdev" 00:02:29.321 ml/*: missing internal dependency, "mldev" 00:02:29.321 vdpa/ifc: not in enabled drivers build config 00:02:29.321 vdpa/mlx5: not in enabled drivers build config 00:02:29.322 vdpa/nfp: not in enabled drivers build config 00:02:29.322 vdpa/sfc: not in enabled drivers build config 00:02:29.322 event/*: missing internal dependency, "eventdev" 00:02:29.322 baseband/*: missing internal dependency, "bbdev" 00:02:29.322 gpu/*: missing internal dependency, "gpudev" 00:02:29.322 00:02:29.322 00:02:29.322 Build targets in project: 85 00:02:29.322 00:02:29.322 DPDK 24.03.0 00:02:29.322 00:02:29.322 User defined options 00:02:29.322 buildtype : debug 00:02:29.322 default_library : shared 00:02:29.322 libdir : lib 00:02:29.322 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:29.322 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:29.322 c_link_args : 00:02:29.322 cpu_instruction_set: native 00:02:29.322 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:29.322 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:29.322 enable_docs : false 00:02:29.322 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:29.322 enable_kmods : false 00:02:29.322 max_lcores : 128 00:02:29.322 tests : false 00:02:29.322 00:02:29.322 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.322 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:29.322 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.322 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:29.322 [3/268] Linking static target lib/librte_kvargs.a 00:02:29.322 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:29.322 [5/268] Linking static target lib/librte_log.a 00:02:29.322 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.888 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.888 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.888 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:29.888 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.888 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:30.145 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:30.145 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:30.145 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:30.145 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.403 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.403 [17/268] Linking static target lib/librte_telemetry.a 00:02:30.403 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.403 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.403 [20/268] Linking target lib/librte_log.so.24.1 00:02:30.660 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.660 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:30.918 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.918 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.918 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.918 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.918 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:31.176 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:31.176 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:31.176 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:31.176 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:31.176 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.434 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:31.434 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:31.434 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:31.434 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.692 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.963 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:31.963 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:31.963 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.963 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:31.963 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.963 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:32.253 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:32.253 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:32.253 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:32.253 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:32.253 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:32.511 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:32.511 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:32.511 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:32.770 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:32.770 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:33.028 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:33.028 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:33.028 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:33.285 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:33.285 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:33.285 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:33.285 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:33.543 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:33.543 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:33.801 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:33.801 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:33.801 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:33.801 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:34.059 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:34.059 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:34.317 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:34.317 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:34.317 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:34.575 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:34.575 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:34.575 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:34.575 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:34.575 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:34.575 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:34.898 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:34.898 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:35.172 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:35.172 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:35.172 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:35.172 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:35.430 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:35.430 [85/268] Linking static target lib/librte_eal.a 00:02:35.689 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:35.689 [87/268] Linking static target lib/librte_ring.a 00:02:35.689 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:35.689 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:35.689 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.689 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:35.689 [92/268] Linking static target lib/librte_rcu.a 00:02:35.947 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:35.947 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:36.205 [95/268] Linking static target lib/librte_mempool.a 00:02:36.205 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.205 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:36.205 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:36.205 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:36.462 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:36.462 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.462 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:36.462 [103/268] Linking static target lib/librte_mbuf.a 00:02:36.718 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:36.718 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:36.976 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:37.234 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:37.234 [108/268] Linking static target lib/librte_meter.a 00:02:37.234 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:37.234 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:37.234 [111/268] Linking static target lib/librte_net.a 00:02:37.491 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.491 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:37.780 [114/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.780 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.110 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.110 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:38.110 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:38.369 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.627 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.627 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:38.627 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:38.886 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.886 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.886 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:39.142 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.142 [127/268] Linking static target lib/librte_pci.a 00:02:39.142 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:39.142 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:39.142 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.142 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.142 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.142 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.142 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:39.400 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.400 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.400 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.400 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.400 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.400 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.400 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.400 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.400 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:39.400 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.659 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:39.659 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.917 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.917 [148/268] Linking static target lib/librte_cmdline.a 00:02:40.174 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:40.174 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:40.174 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.174 [152/268] Linking static target lib/librte_ethdev.a 00:02:40.432 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:40.432 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:40.432 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.432 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.432 [157/268] Linking static target lib/librte_timer.a 00:02:40.691 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:40.691 [159/268] Linking static target lib/librte_hash.a 00:02:40.691 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.951 
[161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:40.951 [162/268] Linking static target lib/librte_compressdev.a 00:02:41.209 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.209 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:41.209 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.209 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:41.209 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.209 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:41.209 [169/268] Linking static target lib/librte_dmadev.a 00:02:41.479 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.479 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:41.737 [172/268] Linking static target lib/librte_cryptodev.a 00:02:41.737 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:41.737 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:41.737 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.737 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.995 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:41.995 [178/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:42.253 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:42.253 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.253 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.253 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:42.253 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.253 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:42.512 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.512 [186/268] Linking static target lib/librte_power.a 00:02:43.081 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:43.081 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:43.081 [189/268] Linking static target lib/librte_security.a 00:02:43.081 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.081 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:43.081 [192/268] Linking static target lib/librte_reorder.a 00:02:43.081 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:43.340 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:43.599 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.858 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.858 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.858 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.858 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:43.858 [200/268] Compiling C 
object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.117 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.376 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:44.376 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.376 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.376 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.635 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.635 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.635 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:44.635 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.635 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.893 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.893 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.893 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.893 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.893 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.893 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:45.152 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:45.152 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.152 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.152 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:45.152 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.152 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:45.152 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.414 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.414 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.414 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.414 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:45.414 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.980 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:45.980 [230/268] Linking static target lib/librte_vhost.a 00:02:46.914 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.914 [232/268] Linking target lib/librte_eal.so.24.1 00:02:46.914 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:46.914 [234/268] Linking target lib/librte_timer.so.24.1 00:02:46.914 [235/268] Linking target lib/librte_ring.so.24.1 00:02:46.914 [236/268] Linking target lib/librte_meter.so.24.1 00:02:46.914 [237/268] Linking target lib/librte_pci.so.24.1 00:02:46.914 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:46.914 [239/268] Linking target lib/librte_dmadev.so.24.1 
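
The "Generating symbol file" entries in this stretch record the symbols each newly linked librte_*.so exports, and the "Linking target" entries are the corresponding shared objects landing under the build tree. If one of them needs a quick manual inspection after the build, something along these lines works (paths are illustrative and depend on the build directory):

nm -D --defined-only /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/lib/librte_eal.so.24.1 | head
objdump -p /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/lib/librte_eal.so.24.1 | grep SONAME
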
00:02:47.171 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.171 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.171 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.171 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.171 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.171 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:47.171 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:47.171 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:47.428 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:47.428 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:47.428 [250/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.428 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:47.428 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:47.686 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:47.686 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:47.686 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:47.686 [256/268] Linking target lib/librte_net.so.24.1 00:02:47.686 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:47.686 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:47.686 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:47.686 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.686 [261/268] Linking target lib/librte_security.so.24.1 00:02:47.943 [262/268] Linking target lib/librte_hash.so.24.1 00:02:47.943 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:47.943 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:47.943 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:47.943 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:47.943 [267/268] Linking target lib/librte_power.so.24.1 00:02:48.201 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:48.201 INFO: autodetecting backend as ninja 00:02:48.201 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:49.136 CC lib/ut/ut.o 00:02:49.136 CC lib/ut_mock/mock.o 00:02:49.136 CC lib/log/log_deprecated.o 00:02:49.136 CC lib/log/log.o 00:02:49.136 CC lib/log/log_flags.o 00:02:49.394 LIB libspdk_ut.a 00:02:49.394 LIB libspdk_ut_mock.a 00:02:49.394 SO libspdk_ut.so.2.0 00:02:49.394 LIB libspdk_log.a 00:02:49.394 SO libspdk_ut_mock.so.6.0 00:02:49.652 SO libspdk_log.so.7.0 00:02:49.652 SYMLINK libspdk_ut.so 00:02:49.652 SYMLINK libspdk_ut_mock.so 00:02:49.652 SYMLINK libspdk_log.so 00:02:49.910 CC lib/dma/dma.o 00:02:49.910 CC lib/ioat/ioat.o 00:02:49.910 CC lib/util/base64.o 00:02:49.910 CC lib/util/bit_array.o 00:02:49.910 CC lib/util/cpuset.o 00:02:49.910 CXX lib/trace_parser/trace.o 00:02:49.910 CC lib/util/crc16.o 00:02:49.910 CC lib/util/crc32.o 00:02:49.910 CC lib/util/crc32c.o 00:02:49.910 CC lib/vfio_user/host/vfio_user_pci.o 00:02:49.910 CC lib/util/crc32_ieee.o 00:02:49.910 CC lib/util/crc64.o 00:02:49.910 CC lib/util/dif.o 
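
With the DPDK subproject linked, the "INFO: autodetecting backend as ninja" lines above mark the hand-off back to SPDK's own make-driven build: the CC/CXX/LIB lines that follow compile the SPDK libraries, and the SO/SYMLINK lines produce their shared-library variants. Outside the CI wrapper, the equivalent manual sequence is roughly the following; the flag set is illustrative (the real one is derived from the job's autorun configuration), and by default the bundled dpdk/ submodule is configured and built as part of make, which is why the meson/ninja output above appears in the middle of the same log:

cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --with-uring
make -j10
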
00:02:49.910 CC lib/util/fd.o 00:02:49.910 LIB libspdk_dma.a 00:02:50.167 CC lib/util/file.o 00:02:50.167 SO libspdk_dma.so.4.0 00:02:50.167 CC lib/vfio_user/host/vfio_user.o 00:02:50.167 CC lib/util/hexlify.o 00:02:50.167 CC lib/util/iov.o 00:02:50.167 SYMLINK libspdk_dma.so 00:02:50.167 CC lib/util/math.o 00:02:50.167 CC lib/util/pipe.o 00:02:50.167 LIB libspdk_ioat.a 00:02:50.167 CC lib/util/strerror_tls.o 00:02:50.167 SO libspdk_ioat.so.7.0 00:02:50.167 SYMLINK libspdk_ioat.so 00:02:50.167 CC lib/util/string.o 00:02:50.167 CC lib/util/uuid.o 00:02:50.167 CC lib/util/fd_group.o 00:02:50.167 CC lib/util/xor.o 00:02:50.167 LIB libspdk_vfio_user.a 00:02:50.167 CC lib/util/zipf.o 00:02:50.424 SO libspdk_vfio_user.so.5.0 00:02:50.424 SYMLINK libspdk_vfio_user.so 00:02:50.424 LIB libspdk_util.a 00:02:50.682 SO libspdk_util.so.9.1 00:02:50.682 LIB libspdk_trace_parser.a 00:02:50.682 SO libspdk_trace_parser.so.5.0 00:02:50.682 SYMLINK libspdk_util.so 00:02:50.940 SYMLINK libspdk_trace_parser.so 00:02:50.940 CC lib/conf/conf.o 00:02:50.940 CC lib/env_dpdk/env.o 00:02:50.940 CC lib/env_dpdk/memory.o 00:02:50.940 CC lib/env_dpdk/pci.o 00:02:50.940 CC lib/env_dpdk/init.o 00:02:50.940 CC lib/rdma_provider/common.o 00:02:50.940 CC lib/idxd/idxd.o 00:02:50.940 CC lib/json/json_parse.o 00:02:50.940 CC lib/vmd/vmd.o 00:02:50.940 CC lib/rdma_utils/rdma_utils.o 00:02:51.198 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:51.198 LIB libspdk_conf.a 00:02:51.198 CC lib/json/json_util.o 00:02:51.198 SO libspdk_conf.so.6.0 00:02:51.198 LIB libspdk_rdma_utils.a 00:02:51.198 SO libspdk_rdma_utils.so.1.0 00:02:51.198 SYMLINK libspdk_conf.so 00:02:51.456 CC lib/idxd/idxd_user.o 00:02:51.456 CC lib/json/json_write.o 00:02:51.456 SYMLINK libspdk_rdma_utils.so 00:02:51.456 CC lib/idxd/idxd_kernel.o 00:02:51.456 LIB libspdk_rdma_provider.a 00:02:51.456 SO libspdk_rdma_provider.so.6.0 00:02:51.456 CC lib/vmd/led.o 00:02:51.456 SYMLINK libspdk_rdma_provider.so 00:02:51.456 CC lib/env_dpdk/threads.o 00:02:51.456 CC lib/env_dpdk/pci_ioat.o 00:02:51.456 CC lib/env_dpdk/pci_virtio.o 00:02:51.456 CC lib/env_dpdk/pci_vmd.o 00:02:51.456 CC lib/env_dpdk/pci_idxd.o 00:02:51.713 CC lib/env_dpdk/pci_event.o 00:02:51.713 LIB libspdk_idxd.a 00:02:51.713 CC lib/env_dpdk/sigbus_handler.o 00:02:51.713 LIB libspdk_vmd.a 00:02:51.713 LIB libspdk_json.a 00:02:51.713 SO libspdk_idxd.so.12.0 00:02:51.713 SO libspdk_vmd.so.6.0 00:02:51.713 SO libspdk_json.so.6.0 00:02:51.713 SYMLINK libspdk_idxd.so 00:02:51.713 CC lib/env_dpdk/pci_dpdk.o 00:02:51.713 SYMLINK libspdk_vmd.so 00:02:51.713 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:51.713 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.713 SYMLINK libspdk_json.so 00:02:51.970 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:51.970 CC lib/jsonrpc/jsonrpc_server.o 00:02:51.970 CC lib/jsonrpc/jsonrpc_client.o 00:02:51.970 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.227 LIB libspdk_jsonrpc.a 00:02:52.227 SO libspdk_jsonrpc.so.6.0 00:02:52.227 SYMLINK libspdk_jsonrpc.so 00:02:52.486 LIB libspdk_env_dpdk.a 00:02:52.486 SO libspdk_env_dpdk.so.14.1 00:02:52.486 CC lib/rpc/rpc.o 00:02:52.486 SYMLINK libspdk_env_dpdk.so 00:02:52.744 LIB libspdk_rpc.a 00:02:52.744 SO libspdk_rpc.so.6.0 00:02:53.003 SYMLINK libspdk_rpc.so 00:02:53.003 CC lib/trace/trace_flags.o 00:02:53.003 CC lib/trace/trace_rpc.o 00:02:53.003 CC lib/trace/trace.o 00:02:53.003 CC lib/keyring/keyring_rpc.o 00:02:53.003 CC lib/keyring/keyring.o 00:02:53.003 CC lib/notify/notify.o 00:02:53.003 CC lib/notify/notify_rpc.o 00:02:53.262 LIB libspdk_notify.a 
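
The SO/SYMLINK pairs in this part of the log (for example libspdk_log.so.7.0 followed by libspdk_log.so) are the shared-library versioning step: each library is linked under a versioned name and an unversioned development symlink is then pointed at it. In shell terms the SYMLINK step amounts to roughly the following; the path is illustrative, and the real rule lives in SPDK's make infrastructure:

cd /home/vagrant/spdk_repo/spdk/build/lib && ln -sf libspdk_log.so.7.0 libspdk_log.so
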
00:02:53.262 SO libspdk_notify.so.6.0 00:02:53.262 LIB libspdk_trace.a 00:02:53.519 SO libspdk_trace.so.10.0 00:02:53.519 LIB libspdk_keyring.a 00:02:53.519 SYMLINK libspdk_notify.so 00:02:53.519 SO libspdk_keyring.so.1.0 00:02:53.519 SYMLINK libspdk_trace.so 00:02:53.519 SYMLINK libspdk_keyring.so 00:02:53.777 CC lib/sock/sock.o 00:02:53.777 CC lib/sock/sock_rpc.o 00:02:53.777 CC lib/thread/iobuf.o 00:02:53.777 CC lib/thread/thread.o 00:02:54.034 LIB libspdk_sock.a 00:02:54.034 SO libspdk_sock.so.10.0 00:02:54.291 SYMLINK libspdk_sock.so 00:02:54.548 CC lib/nvme/nvme_ctrlr.o 00:02:54.549 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.549 CC lib/nvme/nvme_fabric.o 00:02:54.549 CC lib/nvme/nvme_ns_cmd.o 00:02:54.549 CC lib/nvme/nvme_ns.o 00:02:54.549 CC lib/nvme/nvme_pcie_common.o 00:02:54.549 CC lib/nvme/nvme_pcie.o 00:02:54.549 CC lib/nvme/nvme_qpair.o 00:02:54.549 CC lib/nvme/nvme.o 00:02:55.483 LIB libspdk_thread.a 00:02:55.483 CC lib/nvme/nvme_quirks.o 00:02:55.483 CC lib/nvme/nvme_transport.o 00:02:55.483 SO libspdk_thread.so.10.1 00:02:55.483 CC lib/nvme/nvme_discovery.o 00:02:55.483 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.483 SYMLINK libspdk_thread.so 00:02:55.483 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.483 CC lib/nvme/nvme_tcp.o 00:02:55.483 CC lib/nvme/nvme_opal.o 00:02:55.742 CC lib/nvme/nvme_io_msg.o 00:02:55.742 CC lib/accel/accel.o 00:02:56.001 CC lib/accel/accel_rpc.o 00:02:56.001 CC lib/accel/accel_sw.o 00:02:56.001 CC lib/nvme/nvme_poll_group.o 00:02:56.001 CC lib/nvme/nvme_zns.o 00:02:56.001 CC lib/nvme/nvme_stubs.o 00:02:56.001 CC lib/blob/blobstore.o 00:02:56.261 CC lib/init/json_config.o 00:02:56.261 CC lib/nvme/nvme_auth.o 00:02:56.519 CC lib/virtio/virtio.o 00:02:56.519 CC lib/init/subsystem.o 00:02:56.519 LIB libspdk_accel.a 00:02:56.519 CC lib/nvme/nvme_cuse.o 00:02:56.519 SO libspdk_accel.so.15.1 00:02:56.778 CC lib/nvme/nvme_rdma.o 00:02:56.778 CC lib/init/subsystem_rpc.o 00:02:56.778 CC lib/init/rpc.o 00:02:56.778 CC lib/virtio/virtio_vhost_user.o 00:02:56.778 SYMLINK libspdk_accel.so 00:02:56.778 CC lib/virtio/virtio_vfio_user.o 00:02:56.778 CC lib/blob/request.o 00:02:56.778 LIB libspdk_init.a 00:02:56.778 CC lib/bdev/bdev.o 00:02:57.036 SO libspdk_init.so.5.0 00:02:57.036 CC lib/bdev/bdev_rpc.o 00:02:57.036 SYMLINK libspdk_init.so 00:02:57.036 CC lib/bdev/bdev_zone.o 00:02:57.036 CC lib/bdev/part.o 00:02:57.036 CC lib/virtio/virtio_pci.o 00:02:57.294 CC lib/blob/zeroes.o 00:02:57.294 CC lib/bdev/scsi_nvme.o 00:02:57.294 CC lib/blob/blob_bs_dev.o 00:02:57.294 LIB libspdk_virtio.a 00:02:57.294 SO libspdk_virtio.so.7.0 00:02:57.552 SYMLINK libspdk_virtio.so 00:02:57.552 CC lib/event/app.o 00:02:57.552 CC lib/event/reactor.o 00:02:57.552 CC lib/event/app_rpc.o 00:02:57.552 CC lib/event/log_rpc.o 00:02:57.552 CC lib/event/scheduler_static.o 00:02:57.824 LIB libspdk_event.a 00:02:57.824 SO libspdk_event.so.14.0 00:02:58.086 SYMLINK libspdk_event.so 00:02:58.086 LIB libspdk_nvme.a 00:02:58.344 SO libspdk_nvme.so.13.1 00:02:58.602 SYMLINK libspdk_nvme.so 00:02:59.536 LIB libspdk_blob.a 00:02:59.536 SO libspdk_blob.so.11.0 00:02:59.536 LIB libspdk_bdev.a 00:02:59.536 SO libspdk_bdev.so.15.1 00:02:59.536 SYMLINK libspdk_blob.so 00:02:59.536 SYMLINK libspdk_bdev.so 00:02:59.794 CC lib/blobfs/blobfs.o 00:02:59.794 CC lib/blobfs/tree.o 00:02:59.794 CC lib/lvol/lvol.o 00:02:59.794 CC lib/nvmf/ctrlr.o 00:02:59.794 CC lib/nvmf/ctrlr_discovery.o 00:02:59.794 CC lib/nvmf/ctrlr_bdev.o 00:02:59.794 CC lib/nbd/nbd.o 00:02:59.794 CC lib/ublk/ublk.o 00:02:59.794 CC lib/ftl/ftl_core.o 
00:02:59.794 CC lib/scsi/dev.o 00:03:00.052 CC lib/scsi/lun.o 00:03:00.053 CC lib/nbd/nbd_rpc.o 00:03:00.311 CC lib/ftl/ftl_init.o 00:03:00.311 CC lib/ftl/ftl_layout.o 00:03:00.311 CC lib/scsi/port.o 00:03:00.311 LIB libspdk_nbd.a 00:03:00.311 SO libspdk_nbd.so.7.0 00:03:00.311 CC lib/nvmf/subsystem.o 00:03:00.569 SYMLINK libspdk_nbd.so 00:03:00.569 CC lib/ublk/ublk_rpc.o 00:03:00.569 CC lib/nvmf/nvmf.o 00:03:00.569 CC lib/scsi/scsi.o 00:03:00.569 CC lib/nvmf/nvmf_rpc.o 00:03:00.569 CC lib/nvmf/transport.o 00:03:00.569 CC lib/ftl/ftl_debug.o 00:03:00.569 LIB libspdk_ublk.a 00:03:00.569 LIB libspdk_blobfs.a 00:03:00.569 SO libspdk_ublk.so.3.0 00:03:00.569 SO libspdk_blobfs.so.10.0 00:03:00.569 CC lib/scsi/scsi_bdev.o 00:03:00.569 LIB libspdk_lvol.a 00:03:00.828 SO libspdk_lvol.so.10.0 00:03:00.828 SYMLINK libspdk_ublk.so 00:03:00.828 CC lib/nvmf/tcp.o 00:03:00.828 SYMLINK libspdk_lvol.so 00:03:00.828 CC lib/scsi/scsi_pr.o 00:03:00.828 SYMLINK libspdk_blobfs.so 00:03:00.828 CC lib/nvmf/stubs.o 00:03:00.828 CC lib/ftl/ftl_io.o 00:03:01.086 CC lib/ftl/ftl_sb.o 00:03:01.086 CC lib/scsi/scsi_rpc.o 00:03:01.086 CC lib/scsi/task.o 00:03:01.345 CC lib/ftl/ftl_l2p.o 00:03:01.345 CC lib/ftl/ftl_l2p_flat.o 00:03:01.345 CC lib/ftl/ftl_nv_cache.o 00:03:01.345 CC lib/nvmf/mdns_server.o 00:03:01.345 CC lib/nvmf/rdma.o 00:03:01.345 CC lib/nvmf/auth.o 00:03:01.345 LIB libspdk_scsi.a 00:03:01.603 CC lib/ftl/ftl_band.o 00:03:01.603 CC lib/ftl/ftl_band_ops.o 00:03:01.603 SO libspdk_scsi.so.9.0 00:03:01.603 CC lib/ftl/ftl_writer.o 00:03:01.603 SYMLINK libspdk_scsi.so 00:03:01.603 CC lib/ftl/ftl_rq.o 00:03:01.862 CC lib/ftl/ftl_reloc.o 00:03:01.862 CC lib/ftl/ftl_l2p_cache.o 00:03:01.862 CC lib/ftl/ftl_p2l.o 00:03:01.862 CC lib/ftl/mngt/ftl_mngt.o 00:03:01.862 CC lib/iscsi/conn.o 00:03:01.862 CC lib/vhost/vhost.o 00:03:02.118 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.118 CC lib/iscsi/init_grp.o 00:03:02.118 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.118 CC lib/vhost/vhost_rpc.o 00:03:02.376 CC lib/vhost/vhost_scsi.o 00:03:02.376 CC lib/vhost/vhost_blk.o 00:03:02.376 CC lib/vhost/rte_vhost_user.o 00:03:02.376 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.376 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.376 CC lib/iscsi/iscsi.o 00:03:02.376 CC lib/iscsi/md5.o 00:03:02.634 CC lib/iscsi/param.o 00:03:02.634 CC lib/iscsi/portal_grp.o 00:03:02.634 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.634 CC lib/iscsi/tgt_node.o 00:03:02.892 CC lib/iscsi/iscsi_subsystem.o 00:03:02.892 CC lib/iscsi/iscsi_rpc.o 00:03:02.892 CC lib/iscsi/task.o 00:03:02.892 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:03.150 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.150 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.150 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.150 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.150 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.150 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:03.408 CC lib/ftl/utils/ftl_conf.o 00:03:03.408 LIB libspdk_nvmf.a 00:03:03.408 CC lib/ftl/utils/ftl_md.o 00:03:03.408 CC lib/ftl/utils/ftl_mempool.o 00:03:03.408 LIB libspdk_vhost.a 00:03:03.408 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.408 CC lib/ftl/utils/ftl_property.o 00:03:03.408 SO libspdk_vhost.so.8.0 00:03:03.408 SO libspdk_nvmf.so.18.1 00:03:03.408 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.666 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.666 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.666 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.666 SYMLINK libspdk_vhost.so 00:03:03.666 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.666 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.666 SYMLINK libspdk_nvmf.so 00:03:03.666 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.666 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.666 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.666 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.666 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.666 CC lib/ftl/base/ftl_base_dev.o 00:03:03.666 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.924 CC lib/ftl/ftl_trace.o 00:03:03.924 LIB libspdk_iscsi.a 00:03:03.924 SO libspdk_iscsi.so.8.0 00:03:04.181 LIB libspdk_ftl.a 00:03:04.181 SYMLINK libspdk_iscsi.so 00:03:04.455 SO libspdk_ftl.so.9.0 00:03:04.720 SYMLINK libspdk_ftl.so 00:03:04.979 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.237 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.237 CC module/keyring/linux/keyring.o 00:03:05.237 CC module/keyring/file/keyring.o 00:03:05.237 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.237 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.237 CC module/accel/ioat/accel_ioat.o 00:03:05.237 CC module/sock/posix/posix.o 00:03:05.237 CC module/blob/bdev/blob_bdev.o 00:03:05.237 CC module/accel/error/accel_error.o 00:03:05.237 LIB libspdk_env_dpdk_rpc.a 00:03:05.237 SO libspdk_env_dpdk_rpc.so.6.0 00:03:05.237 SYMLINK libspdk_env_dpdk_rpc.so 00:03:05.237 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.237 LIB libspdk_scheduler_gscheduler.a 00:03:05.237 CC module/keyring/file/keyring_rpc.o 00:03:05.237 CC module/keyring/linux/keyring_rpc.o 00:03:05.237 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:05.237 SO libspdk_scheduler_gscheduler.so.4.0 00:03:05.498 LIB libspdk_scheduler_dynamic.a 00:03:05.498 CC module/accel/ioat/accel_ioat_rpc.o 00:03:05.498 CC module/accel/error/accel_error_rpc.o 00:03:05.498 SO libspdk_scheduler_dynamic.so.4.0 00:03:05.498 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.498 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.498 LIB libspdk_blob_bdev.a 00:03:05.498 LIB libspdk_keyring_linux.a 00:03:05.498 LIB libspdk_keyring_file.a 00:03:05.498 SYMLINK libspdk_scheduler_dynamic.so 00:03:05.498 CC module/sock/uring/uring.o 00:03:05.498 SO libspdk_blob_bdev.so.11.0 00:03:05.498 SO libspdk_keyring_file.so.1.0 00:03:05.498 SO libspdk_keyring_linux.so.1.0 00:03:05.498 LIB libspdk_accel_ioat.a 00:03:05.498 LIB libspdk_accel_error.a 00:03:05.498 SYMLINK libspdk_keyring_file.so 00:03:05.498 SYMLINK libspdk_keyring_linux.so 00:03:05.498 SO libspdk_accel_ioat.so.6.0 00:03:05.498 SYMLINK libspdk_blob_bdev.so 00:03:05.498 SO libspdk_accel_error.so.2.0 00:03:05.498 CC module/accel/dsa/accel_dsa.o 00:03:05.498 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.498 CC module/accel/iaa/accel_iaa.o 00:03:05.498 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.757 SYMLINK libspdk_accel_ioat.so 00:03:05.757 SYMLINK libspdk_accel_error.so 00:03:05.757 CC module/bdev/gpt/gpt.o 00:03:05.757 CC module/bdev/delay/vbdev_delay.o 00:03:05.757 CC module/bdev/error/vbdev_error.o 00:03:05.757 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.757 LIB libspdk_accel_iaa.a 00:03:05.757 LIB libspdk_accel_dsa.a 00:03:06.014 SO libspdk_accel_iaa.so.3.0 00:03:06.014 LIB libspdk_sock_posix.a 00:03:06.014 SO libspdk_accel_dsa.so.5.0 00:03:06.014 CC module/bdev/lvol/vbdev_lvol.o 00:03:06.014 SO libspdk_sock_posix.so.6.0 00:03:06.014 CC module/bdev/malloc/bdev_malloc.o 00:03:06.014 SYMLINK libspdk_accel_iaa.so 00:03:06.014 SYMLINK libspdk_accel_dsa.so 00:03:06.014 CC module/bdev/gpt/vbdev_gpt.o 00:03:06.014 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:06.014 SYMLINK libspdk_sock_posix.so 00:03:06.014 CC 
module/bdev/error/vbdev_error_rpc.o 00:03:06.014 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:06.014 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:06.272 LIB libspdk_blobfs_bdev.a 00:03:06.272 LIB libspdk_sock_uring.a 00:03:06.272 LIB libspdk_bdev_error.a 00:03:06.272 SO libspdk_blobfs_bdev.so.6.0 00:03:06.272 CC module/bdev/null/bdev_null.o 00:03:06.272 SO libspdk_sock_uring.so.5.0 00:03:06.272 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.272 SO libspdk_bdev_error.so.6.0 00:03:06.272 LIB libspdk_bdev_gpt.a 00:03:06.272 SYMLINK libspdk_blobfs_bdev.so 00:03:06.272 SYMLINK libspdk_sock_uring.so 00:03:06.272 SO libspdk_bdev_gpt.so.6.0 00:03:06.272 SYMLINK libspdk_bdev_error.so 00:03:06.272 LIB libspdk_bdev_malloc.a 00:03:06.272 SO libspdk_bdev_malloc.so.6.0 00:03:06.272 CC module/bdev/nvme/bdev_nvme.o 00:03:06.272 SYMLINK libspdk_bdev_gpt.so 00:03:06.272 LIB libspdk_bdev_delay.a 00:03:06.530 SYMLINK libspdk_bdev_malloc.so 00:03:06.530 SO libspdk_bdev_delay.so.6.0 00:03:06.530 CC module/bdev/null/bdev_null_rpc.o 00:03:06.530 CC module/bdev/raid/bdev_raid.o 00:03:06.530 CC module/bdev/passthru/vbdev_passthru.o 00:03:06.530 CC module/bdev/split/vbdev_split.o 00:03:06.530 CC module/bdev/raid/bdev_raid_rpc.o 00:03:06.530 LIB libspdk_bdev_lvol.a 00:03:06.530 SYMLINK libspdk_bdev_delay.so 00:03:06.530 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:06.530 SO libspdk_bdev_lvol.so.6.0 00:03:06.530 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.530 CC module/bdev/uring/bdev_uring.o 00:03:06.530 SYMLINK libspdk_bdev_lvol.so 00:03:06.530 LIB libspdk_bdev_null.a 00:03:06.788 SO libspdk_bdev_null.so.6.0 00:03:06.788 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:06.788 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.788 CC module/bdev/raid/bdev_raid_sb.o 00:03:06.788 SYMLINK libspdk_bdev_null.so 00:03:06.788 CC module/bdev/raid/raid0.o 00:03:06.788 CC module/bdev/aio/bdev_aio.o 00:03:06.788 LIB libspdk_bdev_passthru.a 00:03:06.788 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:06.788 LIB libspdk_bdev_split.a 00:03:06.788 SO libspdk_bdev_passthru.so.6.0 00:03:07.046 SO libspdk_bdev_split.so.6.0 00:03:07.046 SYMLINK libspdk_bdev_passthru.so 00:03:07.046 CC module/bdev/uring/bdev_uring_rpc.o 00:03:07.046 CC module/bdev/nvme/nvme_rpc.o 00:03:07.046 SYMLINK libspdk_bdev_split.so 00:03:07.046 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.046 CC module/bdev/raid/raid1.o 00:03:07.046 CC module/bdev/raid/concat.o 00:03:07.046 LIB libspdk_bdev_zone_block.a 00:03:07.046 SO libspdk_bdev_zone_block.so.6.0 00:03:07.046 SYMLINK libspdk_bdev_zone_block.so 00:03:07.046 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.304 CC module/bdev/nvme/vbdev_opal.o 00:03:07.304 LIB libspdk_bdev_uring.a 00:03:07.304 SO libspdk_bdev_uring.so.6.0 00:03:07.304 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.304 SYMLINK libspdk_bdev_uring.so 00:03:07.304 CC module/bdev/ftl/bdev_ftl.o 00:03:07.304 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.304 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.304 LIB libspdk_bdev_aio.a 00:03:07.304 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.304 SO libspdk_bdev_aio.so.6.0 00:03:07.304 LIB libspdk_bdev_raid.a 00:03:07.563 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.563 SYMLINK libspdk_bdev_aio.so 00:03:07.563 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.563 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.563 SO libspdk_bdev_raid.so.6.0 00:03:07.563 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.563 SYMLINK libspdk_bdev_raid.so 00:03:07.563 LIB libspdk_bdev_ftl.a 00:03:07.563 SO 
libspdk_bdev_ftl.so.6.0 00:03:07.821 SYMLINK libspdk_bdev_ftl.so 00:03:07.821 LIB libspdk_bdev_iscsi.a 00:03:07.821 SO libspdk_bdev_iscsi.so.6.0 00:03:07.821 SYMLINK libspdk_bdev_iscsi.so 00:03:08.079 LIB libspdk_bdev_virtio.a 00:03:08.079 SO libspdk_bdev_virtio.so.6.0 00:03:08.079 SYMLINK libspdk_bdev_virtio.so 00:03:08.645 LIB libspdk_bdev_nvme.a 00:03:08.903 SO libspdk_bdev_nvme.so.7.0 00:03:08.903 SYMLINK libspdk_bdev_nvme.so 00:03:09.470 CC module/event/subsystems/vmd/vmd.o 00:03:09.470 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:09.470 CC module/event/subsystems/sock/sock.o 00:03:09.470 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:09.470 CC module/event/subsystems/scheduler/scheduler.o 00:03:09.470 CC module/event/subsystems/iobuf/iobuf.o 00:03:09.470 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:09.470 CC module/event/subsystems/keyring/keyring.o 00:03:09.470 LIB libspdk_event_scheduler.a 00:03:09.470 LIB libspdk_event_keyring.a 00:03:09.470 LIB libspdk_event_vhost_blk.a 00:03:09.470 LIB libspdk_event_vmd.a 00:03:09.728 LIB libspdk_event_sock.a 00:03:09.728 SO libspdk_event_scheduler.so.4.0 00:03:09.728 LIB libspdk_event_iobuf.a 00:03:09.728 SO libspdk_event_keyring.so.1.0 00:03:09.728 SO libspdk_event_vhost_blk.so.3.0 00:03:09.728 SO libspdk_event_vmd.so.6.0 00:03:09.728 SO libspdk_event_sock.so.5.0 00:03:09.728 SO libspdk_event_iobuf.so.3.0 00:03:09.728 SYMLINK libspdk_event_keyring.so 00:03:09.728 SYMLINK libspdk_event_scheduler.so 00:03:09.728 SYMLINK libspdk_event_vhost_blk.so 00:03:09.728 SYMLINK libspdk_event_vmd.so 00:03:09.728 SYMLINK libspdk_event_sock.so 00:03:09.728 SYMLINK libspdk_event_iobuf.so 00:03:09.987 CC module/event/subsystems/accel/accel.o 00:03:10.246 LIB libspdk_event_accel.a 00:03:10.246 SO libspdk_event_accel.so.6.0 00:03:10.246 SYMLINK libspdk_event_accel.so 00:03:10.504 CC module/event/subsystems/bdev/bdev.o 00:03:10.762 LIB libspdk_event_bdev.a 00:03:10.762 SO libspdk_event_bdev.so.6.0 00:03:10.762 SYMLINK libspdk_event_bdev.so 00:03:11.019 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:11.019 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.019 CC module/event/subsystems/ublk/ublk.o 00:03:11.019 CC module/event/subsystems/nbd/nbd.o 00:03:11.019 CC module/event/subsystems/scsi/scsi.o 00:03:11.275 LIB libspdk_event_nbd.a 00:03:11.275 LIB libspdk_event_ublk.a 00:03:11.275 LIB libspdk_event_scsi.a 00:03:11.275 SO libspdk_event_nbd.so.6.0 00:03:11.275 SO libspdk_event_ublk.so.3.0 00:03:11.275 SO libspdk_event_scsi.so.6.0 00:03:11.275 SYMLINK libspdk_event_nbd.so 00:03:11.275 SYMLINK libspdk_event_ublk.so 00:03:11.275 LIB libspdk_event_nvmf.a 00:03:11.275 SYMLINK libspdk_event_scsi.so 00:03:11.275 SO libspdk_event_nvmf.so.6.0 00:03:11.532 SYMLINK libspdk_event_nvmf.so 00:03:11.532 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.532 CC module/event/subsystems/iscsi/iscsi.o 00:03:11.790 LIB libspdk_event_vhost_scsi.a 00:03:11.790 LIB libspdk_event_iscsi.a 00:03:11.790 SO libspdk_event_vhost_scsi.so.3.0 00:03:11.790 SO libspdk_event_iscsi.so.6.0 00:03:11.790 SYMLINK libspdk_event_vhost_scsi.so 00:03:12.048 SYMLINK libspdk_event_iscsi.so 00:03:12.048 SO libspdk.so.6.0 00:03:12.048 SYMLINK libspdk.so 00:03:12.305 TEST_HEADER include/spdk/accel.h 00:03:12.305 CC app/trace_record/trace_record.o 00:03:12.305 TEST_HEADER include/spdk/accel_module.h 00:03:12.305 CXX app/trace/trace.o 00:03:12.305 TEST_HEADER include/spdk/assert.h 00:03:12.305 CC test/rpc_client/rpc_client_test.o 00:03:12.305 TEST_HEADER include/spdk/barrier.h 
00:03:12.305 TEST_HEADER include/spdk/base64.h 00:03:12.305 TEST_HEADER include/spdk/bdev.h 00:03:12.305 TEST_HEADER include/spdk/bdev_module.h 00:03:12.305 TEST_HEADER include/spdk/bdev_zone.h 00:03:12.305 TEST_HEADER include/spdk/bit_array.h 00:03:12.305 TEST_HEADER include/spdk/bit_pool.h 00:03:12.305 TEST_HEADER include/spdk/blob_bdev.h 00:03:12.305 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:12.305 TEST_HEADER include/spdk/blobfs.h 00:03:12.305 TEST_HEADER include/spdk/blob.h 00:03:12.305 TEST_HEADER include/spdk/conf.h 00:03:12.305 TEST_HEADER include/spdk/config.h 00:03:12.305 TEST_HEADER include/spdk/cpuset.h 00:03:12.305 TEST_HEADER include/spdk/crc16.h 00:03:12.305 TEST_HEADER include/spdk/crc32.h 00:03:12.305 TEST_HEADER include/spdk/crc64.h 00:03:12.305 TEST_HEADER include/spdk/dif.h 00:03:12.305 TEST_HEADER include/spdk/dma.h 00:03:12.305 TEST_HEADER include/spdk/endian.h 00:03:12.305 TEST_HEADER include/spdk/env_dpdk.h 00:03:12.305 TEST_HEADER include/spdk/env.h 00:03:12.305 CC app/nvmf_tgt/nvmf_main.o 00:03:12.305 TEST_HEADER include/spdk/event.h 00:03:12.305 TEST_HEADER include/spdk/fd_group.h 00:03:12.305 TEST_HEADER include/spdk/fd.h 00:03:12.305 TEST_HEADER include/spdk/file.h 00:03:12.305 TEST_HEADER include/spdk/ftl.h 00:03:12.305 TEST_HEADER include/spdk/gpt_spec.h 00:03:12.305 TEST_HEADER include/spdk/hexlify.h 00:03:12.305 TEST_HEADER include/spdk/histogram_data.h 00:03:12.305 TEST_HEADER include/spdk/idxd.h 00:03:12.305 TEST_HEADER include/spdk/idxd_spec.h 00:03:12.305 TEST_HEADER include/spdk/init.h 00:03:12.305 TEST_HEADER include/spdk/ioat.h 00:03:12.305 TEST_HEADER include/spdk/ioat_spec.h 00:03:12.305 TEST_HEADER include/spdk/iscsi_spec.h 00:03:12.305 CC test/thread/poller_perf/poller_perf.o 00:03:12.305 TEST_HEADER include/spdk/json.h 00:03:12.305 TEST_HEADER include/spdk/jsonrpc.h 00:03:12.563 TEST_HEADER include/spdk/keyring.h 00:03:12.563 TEST_HEADER include/spdk/keyring_module.h 00:03:12.563 TEST_HEADER include/spdk/likely.h 00:03:12.563 CC examples/util/zipf/zipf.o 00:03:12.563 TEST_HEADER include/spdk/log.h 00:03:12.563 TEST_HEADER include/spdk/lvol.h 00:03:12.563 TEST_HEADER include/spdk/memory.h 00:03:12.563 TEST_HEADER include/spdk/mmio.h 00:03:12.563 TEST_HEADER include/spdk/nbd.h 00:03:12.563 TEST_HEADER include/spdk/notify.h 00:03:12.563 TEST_HEADER include/spdk/nvme.h 00:03:12.563 TEST_HEADER include/spdk/nvme_intel.h 00:03:12.563 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:12.563 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:12.563 TEST_HEADER include/spdk/nvme_spec.h 00:03:12.563 TEST_HEADER include/spdk/nvme_zns.h 00:03:12.563 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:12.563 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:12.563 TEST_HEADER include/spdk/nvmf.h 00:03:12.563 TEST_HEADER include/spdk/nvmf_spec.h 00:03:12.563 CC test/app/bdev_svc/bdev_svc.o 00:03:12.563 TEST_HEADER include/spdk/nvmf_transport.h 00:03:12.563 TEST_HEADER include/spdk/opal.h 00:03:12.563 TEST_HEADER include/spdk/opal_spec.h 00:03:12.563 CC test/dma/test_dma/test_dma.o 00:03:12.563 TEST_HEADER include/spdk/pci_ids.h 00:03:12.563 TEST_HEADER include/spdk/pipe.h 00:03:12.563 TEST_HEADER include/spdk/queue.h 00:03:12.563 TEST_HEADER include/spdk/reduce.h 00:03:12.563 TEST_HEADER include/spdk/rpc.h 00:03:12.563 TEST_HEADER include/spdk/scheduler.h 00:03:12.563 TEST_HEADER include/spdk/scsi.h 00:03:12.563 TEST_HEADER include/spdk/scsi_spec.h 00:03:12.563 TEST_HEADER include/spdk/sock.h 00:03:12.563 TEST_HEADER include/spdk/stdinc.h 00:03:12.563 TEST_HEADER 
include/spdk/string.h 00:03:12.563 TEST_HEADER include/spdk/thread.h 00:03:12.563 TEST_HEADER include/spdk/trace.h 00:03:12.563 TEST_HEADER include/spdk/trace_parser.h 00:03:12.563 TEST_HEADER include/spdk/tree.h 00:03:12.563 TEST_HEADER include/spdk/ublk.h 00:03:12.563 CC test/env/mem_callbacks/mem_callbacks.o 00:03:12.563 TEST_HEADER include/spdk/util.h 00:03:12.563 TEST_HEADER include/spdk/uuid.h 00:03:12.563 TEST_HEADER include/spdk/version.h 00:03:12.563 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:12.563 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:12.563 TEST_HEADER include/spdk/vhost.h 00:03:12.563 TEST_HEADER include/spdk/vmd.h 00:03:12.563 TEST_HEADER include/spdk/xor.h 00:03:12.563 TEST_HEADER include/spdk/zipf.h 00:03:12.563 CXX test/cpp_headers/accel.o 00:03:12.563 LINK rpc_client_test 00:03:12.563 LINK poller_perf 00:03:12.563 LINK nvmf_tgt 00:03:12.563 LINK zipf 00:03:12.563 LINK spdk_trace_record 00:03:12.563 LINK bdev_svc 00:03:12.821 CXX test/cpp_headers/accel_module.o 00:03:12.821 CC test/env/vtophys/vtophys.o 00:03:12.821 LINK spdk_trace 00:03:12.821 LINK test_dma 00:03:12.821 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.821 CXX test/cpp_headers/assert.o 00:03:12.821 CC app/spdk_lspci/spdk_lspci.o 00:03:13.078 CC examples/ioat/perf/perf.o 00:03:13.078 LINK vtophys 00:03:13.078 CC app/spdk_tgt/spdk_tgt.o 00:03:13.078 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:13.078 LINK spdk_lspci 00:03:13.078 CXX test/cpp_headers/barrier.o 00:03:13.078 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:13.078 LINK iscsi_tgt 00:03:13.078 LINK mem_callbacks 00:03:13.078 LINK ioat_perf 00:03:13.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:13.336 LINK spdk_tgt 00:03:13.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:13.336 CC app/spdk_nvme_perf/perf.o 00:03:13.336 CXX test/cpp_headers/base64.o 00:03:13.336 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:13.336 CC examples/ioat/verify/verify.o 00:03:13.336 CXX test/cpp_headers/bdev.o 00:03:13.336 CC test/env/memory/memory_ut.o 00:03:13.336 CXX test/cpp_headers/bdev_module.o 00:03:13.593 CC app/spdk_nvme_identify/identify.o 00:03:13.593 LINK nvme_fuzz 00:03:13.593 LINK env_dpdk_post_init 00:03:13.593 CXX test/cpp_headers/bdev_zone.o 00:03:13.593 LINK vhost_fuzz 00:03:13.593 LINK verify 00:03:13.874 CC app/spdk_nvme_discover/discovery_aer.o 00:03:13.874 CXX test/cpp_headers/bit_array.o 00:03:13.874 CC examples/vmd/lsvmd/lsvmd.o 00:03:13.874 CC examples/vmd/led/led.o 00:03:13.874 CC test/app/histogram_perf/histogram_perf.o 00:03:13.874 LINK spdk_nvme_discover 00:03:13.874 CC examples/idxd/perf/perf.o 00:03:13.874 CXX test/cpp_headers/bit_pool.o 00:03:13.874 LINK lsvmd 00:03:14.132 LINK led 00:03:14.132 LINK histogram_perf 00:03:14.132 CXX test/cpp_headers/blob_bdev.o 00:03:14.132 LINK spdk_nvme_perf 00:03:14.132 CXX test/cpp_headers/blobfs_bdev.o 00:03:14.132 CXX test/cpp_headers/blobfs.o 00:03:14.132 CXX test/cpp_headers/blob.o 00:03:14.132 LINK spdk_nvme_identify 00:03:14.390 CC test/event/event_perf/event_perf.o 00:03:14.390 LINK idxd_perf 00:03:14.390 CC test/event/reactor/reactor.o 00:03:14.390 CC test/event/reactor_perf/reactor_perf.o 00:03:14.390 CXX test/cpp_headers/conf.o 00:03:14.390 CC test/event/app_repeat/app_repeat.o 00:03:14.390 CC app/spdk_top/spdk_top.o 00:03:14.390 LINK event_perf 00:03:14.648 CC test/event/scheduler/scheduler.o 00:03:14.648 LINK reactor_perf 00:03:14.648 LINK reactor 00:03:14.648 CXX test/cpp_headers/config.o 00:03:14.648 LINK memory_ut 00:03:14.648 CXX test/cpp_headers/cpuset.o 
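
The long TEST_HEADER include/spdk/*.h list and the CXX test/cpp_headers/*.o lines around it appear to be SPDK's header self-containedness check: each public header is compiled as its own translation unit so that a missing include or a C++ incompatibility fails the build immediately. A minimal hand-rolled version of the same idea looks like this (the file name and the chosen header are purely illustrative):

printf '#include <spdk/nvme.h>\nint main(void) { return 0; }\n' > /tmp/hdr_check.cpp
c++ -I /home/vagrant/spdk_repo/spdk/include -c /tmp/hdr_check.cpp -o /dev/null
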
00:03:14.648 LINK app_repeat 00:03:14.648 CXX test/cpp_headers/crc16.o 00:03:14.648 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.648 CXX test/cpp_headers/crc32.o 00:03:14.905 LINK scheduler 00:03:14.905 LINK iscsi_fuzz 00:03:14.905 CC app/vhost/vhost.o 00:03:14.905 CC app/spdk_dd/spdk_dd.o 00:03:14.905 LINK interrupt_tgt 00:03:14.905 CC test/app/jsoncat/jsoncat.o 00:03:14.905 CXX test/cpp_headers/crc64.o 00:03:14.905 CC app/fio/nvme/fio_plugin.o 00:03:14.905 CC test/env/pci/pci_ut.o 00:03:14.905 CXX test/cpp_headers/dif.o 00:03:15.162 LINK jsoncat 00:03:15.162 LINK vhost 00:03:15.162 CXX test/cpp_headers/dma.o 00:03:15.162 CXX test/cpp_headers/endian.o 00:03:15.162 CC app/fio/bdev/fio_plugin.o 00:03:15.162 CXX test/cpp_headers/env_dpdk.o 00:03:15.419 CC test/app/stub/stub.o 00:03:15.419 CC examples/thread/thread/thread_ex.o 00:03:15.419 LINK spdk_dd 00:03:15.419 LINK spdk_top 00:03:15.419 LINK pci_ut 00:03:15.419 CXX test/cpp_headers/env.o 00:03:15.419 CC test/nvme/aer/aer.o 00:03:15.419 LINK stub 00:03:15.419 CXX test/cpp_headers/event.o 00:03:15.419 LINK spdk_nvme 00:03:15.676 CXX test/cpp_headers/fd_group.o 00:03:15.677 CC examples/sock/hello_world/hello_sock.o 00:03:15.677 LINK thread 00:03:15.677 CC test/nvme/reset/reset.o 00:03:15.677 CXX test/cpp_headers/fd.o 00:03:15.677 LINK aer 00:03:15.677 LINK spdk_bdev 00:03:15.677 CC test/nvme/sgl/sgl.o 00:03:15.934 CC test/blobfs/mkfs/mkfs.o 00:03:15.934 CC test/accel/dif/dif.o 00:03:15.934 LINK hello_sock 00:03:15.934 CXX test/cpp_headers/file.o 00:03:15.934 CC test/nvme/e2edp/nvme_dp.o 00:03:15.934 CC test/lvol/esnap/esnap.o 00:03:15.934 LINK reset 00:03:15.934 CC test/nvme/err_injection/err_injection.o 00:03:15.934 CC test/nvme/overhead/overhead.o 00:03:16.191 LINK sgl 00:03:16.191 LINK mkfs 00:03:16.191 CXX test/cpp_headers/ftl.o 00:03:16.191 LINK err_injection 00:03:16.191 LINK nvme_dp 00:03:16.191 CC examples/accel/perf/accel_perf.o 00:03:16.191 CC test/nvme/startup/startup.o 00:03:16.191 LINK overhead 00:03:16.191 CXX test/cpp_headers/gpt_spec.o 00:03:16.191 LINK dif 00:03:16.448 CC test/nvme/reserve/reserve.o 00:03:16.448 LINK startup 00:03:16.448 CXX test/cpp_headers/hexlify.o 00:03:16.448 CC test/nvme/simple_copy/simple_copy.o 00:03:16.448 CC test/nvme/connect_stress/connect_stress.o 00:03:16.448 CC examples/blob/hello_world/hello_blob.o 00:03:16.448 CC test/nvme/boot_partition/boot_partition.o 00:03:16.448 LINK reserve 00:03:16.712 CXX test/cpp_headers/histogram_data.o 00:03:16.712 CC examples/blob/cli/blobcli.o 00:03:16.712 LINK connect_stress 00:03:16.712 LINK accel_perf 00:03:16.712 LINK simple_copy 00:03:16.712 LINK boot_partition 00:03:16.712 CXX test/cpp_headers/idxd.o 00:03:16.712 LINK hello_blob 00:03:16.712 CC test/nvme/compliance/nvme_compliance.o 00:03:17.001 CC test/bdev/bdevio/bdevio.o 00:03:17.001 CC test/nvme/fused_ordering/fused_ordering.o 00:03:17.001 CXX test/cpp_headers/idxd_spec.o 00:03:17.001 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:17.001 CC examples/nvme/hello_world/hello_world.o 00:03:17.001 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.260 LINK blobcli 00:03:17.260 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.260 LINK nvme_compliance 00:03:17.260 CXX test/cpp_headers/init.o 00:03:17.260 LINK fused_ordering 00:03:17.260 LINK doorbell_aers 00:03:17.260 LINK bdevio 00:03:17.260 LINK hello_world 00:03:17.260 CXX test/cpp_headers/ioat.o 00:03:17.260 CXX test/cpp_headers/ioat_spec.o 00:03:17.260 LINK hello_bdev 00:03:17.518 CXX test/cpp_headers/iscsi_spec.o 00:03:17.518 CC 
examples/nvme/reconnect/reconnect.o 00:03:17.518 CC test/nvme/fdp/fdp.o 00:03:17.518 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:17.518 CXX test/cpp_headers/json.o 00:03:17.518 CC test/nvme/cuse/cuse.o 00:03:17.518 CC examples/nvme/arbitration/arbitration.o 00:03:17.518 CC examples/nvme/hotplug/hotplug.o 00:03:17.777 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:17.777 CXX test/cpp_headers/jsonrpc.o 00:03:17.777 LINK fdp 00:03:17.777 LINK reconnect 00:03:17.777 LINK cmb_copy 00:03:17.777 LINK hotplug 00:03:17.777 LINK bdevperf 00:03:17.777 CXX test/cpp_headers/keyring.o 00:03:18.034 CXX test/cpp_headers/keyring_module.o 00:03:18.034 CXX test/cpp_headers/likely.o 00:03:18.034 LINK arbitration 00:03:18.034 LINK nvme_manage 00:03:18.034 CXX test/cpp_headers/log.o 00:03:18.034 CXX test/cpp_headers/lvol.o 00:03:18.034 CXX test/cpp_headers/memory.o 00:03:18.034 CXX test/cpp_headers/mmio.o 00:03:18.034 CXX test/cpp_headers/nbd.o 00:03:18.034 CXX test/cpp_headers/notify.o 00:03:18.034 CC examples/nvme/abort/abort.o 00:03:18.034 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:18.291 CXX test/cpp_headers/nvme.o 00:03:18.291 CXX test/cpp_headers/nvme_intel.o 00:03:18.291 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.292 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.292 CXX test/cpp_headers/nvme_spec.o 00:03:18.292 CXX test/cpp_headers/nvme_zns.o 00:03:18.292 LINK pmr_persistence 00:03:18.292 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.551 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.551 CXX test/cpp_headers/nvmf.o 00:03:18.551 CXX test/cpp_headers/nvmf_spec.o 00:03:18.551 CXX test/cpp_headers/nvmf_transport.o 00:03:18.551 CXX test/cpp_headers/opal.o 00:03:18.551 LINK abort 00:03:18.551 CXX test/cpp_headers/opal_spec.o 00:03:18.551 CXX test/cpp_headers/pci_ids.o 00:03:18.551 CXX test/cpp_headers/pipe.o 00:03:18.551 CXX test/cpp_headers/queue.o 00:03:18.551 CXX test/cpp_headers/reduce.o 00:03:18.551 CXX test/cpp_headers/rpc.o 00:03:18.551 CXX test/cpp_headers/scheduler.o 00:03:18.809 CXX test/cpp_headers/scsi.o 00:03:18.809 CXX test/cpp_headers/scsi_spec.o 00:03:18.809 CXX test/cpp_headers/sock.o 00:03:18.809 CXX test/cpp_headers/stdinc.o 00:03:18.809 CXX test/cpp_headers/string.o 00:03:18.809 CXX test/cpp_headers/thread.o 00:03:18.809 CXX test/cpp_headers/trace.o 00:03:18.809 CXX test/cpp_headers/trace_parser.o 00:03:18.809 LINK cuse 00:03:18.809 CXX test/cpp_headers/tree.o 00:03:18.809 CC examples/nvmf/nvmf/nvmf.o 00:03:18.809 CXX test/cpp_headers/ublk.o 00:03:19.066 CXX test/cpp_headers/util.o 00:03:19.066 CXX test/cpp_headers/uuid.o 00:03:19.066 CXX test/cpp_headers/version.o 00:03:19.066 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.066 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.066 CXX test/cpp_headers/vhost.o 00:03:19.066 CXX test/cpp_headers/vmd.o 00:03:19.066 CXX test/cpp_headers/xor.o 00:03:19.066 CXX test/cpp_headers/zipf.o 00:03:19.324 LINK nvmf 00:03:20.698 LINK esnap 00:03:21.349 ************************************ 00:03:21.350 END TEST make 00:03:21.350 ************************************ 00:03:21.350 00:03:21.350 real 1m3.287s 00:03:21.350 user 6m28.248s 00:03:21.350 sys 1m33.212s 00:03:21.350 16:18:06 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:21.350 16:18:06 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.350 16:18:06 -- common/autotest_common.sh@1142 -- $ return 0 00:03:21.350 16:18:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.350 16:18:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:21.350 16:18:06 -- 
pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:21.350 16:18:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.350 16:18:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.350 16:18:06 -- pm/common@44 -- $ pid=5142 00:03:21.350 16:18:06 -- pm/common@50 -- $ kill -TERM 5142 00:03:21.350 16:18:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.350 16:18:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.350 16:18:06 -- pm/common@44 -- $ pid=5144 00:03:21.350 16:18:06 -- pm/common@50 -- $ kill -TERM 5144 00:03:21.350 16:18:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:21.350 16:18:06 -- nvmf/common.sh@7 -- # uname -s 00:03:21.350 16:18:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.350 16:18:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.350 16:18:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.350 16:18:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.350 16:18:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.350 16:18:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.350 16:18:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.350 16:18:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.350 16:18:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.350 16:18:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.350 16:18:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:03:21.350 16:18:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:03:21.350 16:18:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.350 16:18:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.350 16:18:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:21.350 16:18:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:21.350 16:18:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:21.350 16:18:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.350 16:18:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.350 16:18:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.350 16:18:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.350 16:18:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.350 16:18:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.350 16:18:06 -- paths/export.sh@5 -- # export PATH 00:03:21.350 16:18:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.350 16:18:06 -- nvmf/common.sh@47 -- # : 0 00:03:21.350 16:18:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:21.350 16:18:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:21.350 16:18:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:21.350 16:18:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.350 16:18:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.350 16:18:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:21.350 16:18:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:21.350 16:18:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:21.350 16:18:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.350 16:18:06 -- spdk/autotest.sh@32 -- # uname -s 00:03:21.350 16:18:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.350 16:18:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.350 16:18:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.350 16:18:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.350 16:18:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.350 16:18:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.350 16:18:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.350 16:18:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.350 16:18:06 -- spdk/autotest.sh@48 -- # udevadm_pid=52777 00:03:21.350 16:18:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.350 16:18:06 -- pm/common@17 -- # local monitor 00:03:21.350 16:18:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.350 16:18:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.350 16:18:06 -- pm/common@25 -- # sleep 1 00:03:21.350 16:18:06 -- pm/common@21 -- # date +%s 00:03:21.350 16:18:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.350 16:18:06 -- pm/common@21 -- # date +%s 00:03:21.350 16:18:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721060286 00:03:21.350 16:18:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721060286 00:03:21.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721060286_collect-vmstat.pm.log 00:03:21.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721060286_collect-cpu-load.pm.log 00:03:22.724 16:18:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.724 16:18:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.724 16:18:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:22.724 16:18:07 -- common/autotest_common.sh@10 -- # set +x 00:03:22.724 16:18:07 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.724 16:18:07 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:22.724 16:18:07 -- common/autotest_common.sh@10 -- # set +x 00:03:22.724 16:18:07 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:22.724 16:18:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:22.724 16:18:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:22.724 16:18:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:22.724 16:18:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:22.724 16:18:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.724 16:18:07 -- common/autotest_common.sh@1455 -- # uname 00:03:22.724 16:18:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:22.724 16:18:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.724 16:18:07 -- common/autotest_common.sh@1475 -- # uname 00:03:22.724 16:18:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:22.724 16:18:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:22.724 16:18:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:22.724 16:18:07 -- spdk/autotest.sh@72 -- # hash lcov 00:03:22.724 16:18:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:22.724 16:18:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:22.724 --rc lcov_branch_coverage=1 00:03:22.724 --rc lcov_function_coverage=1 00:03:22.724 --rc genhtml_branch_coverage=1 00:03:22.724 --rc genhtml_function_coverage=1 00:03:22.724 --rc genhtml_legend=1 00:03:22.724 --rc geninfo_all_blocks=1 00:03:22.724 ' 00:03:22.724 16:18:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:22.724 --rc lcov_branch_coverage=1 00:03:22.724 --rc lcov_function_coverage=1 00:03:22.724 --rc genhtml_branch_coverage=1 00:03:22.724 --rc genhtml_function_coverage=1 00:03:22.724 --rc genhtml_legend=1 00:03:22.724 --rc geninfo_all_blocks=1 00:03:22.724 ' 00:03:22.724 16:18:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:22.724 --rc lcov_branch_coverage=1 00:03:22.724 --rc lcov_function_coverage=1 00:03:22.724 --rc genhtml_branch_coverage=1 00:03:22.724 --rc genhtml_function_coverage=1 00:03:22.724 --rc genhtml_legend=1 00:03:22.724 --rc geninfo_all_blocks=1 00:03:22.724 --no-external' 00:03:22.724 16:18:07 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:22.724 --rc lcov_branch_coverage=1 00:03:22.724 --rc lcov_function_coverage=1 00:03:22.724 --rc genhtml_branch_coverage=1 00:03:22.724 --rc genhtml_function_coverage=1 00:03:22.724 --rc genhtml_legend=1 00:03:22.724 --rc geninfo_all_blocks=1 00:03:22.724 --no-external' 00:03:22.724 16:18:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:22.724 lcov: LCOV version 1.14 00:03:22.724 16:18:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:37.603 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:37.603 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:49.828 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:49.828 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:49.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:49.828 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:49.829 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:49.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:50.098 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:50.098 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:50.098 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:50.098 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:50.098 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:50.098 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:50.098 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:50.098 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:50.098 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:50.098 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:50.098 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:50.098 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:50.098 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:50.098 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:50.098 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:50.098 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:53.413 16:18:38 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:53.413 16:18:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.413 16:18:38 -- common/autotest_common.sh@10 -- # set +x 00:03:53.413 16:18:38 -- spdk/autotest.sh@91 -- # rm -f 00:03:53.413 16:18:38 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.980 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:54.238 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:54.238 16:18:39 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:54.238 16:18:39 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:54.238 16:18:39 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:54.238 16:18:39 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:54.238 16:18:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.238 16:18:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:54.238 16:18:39 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:54.238 16:18:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.238 16:18:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.238 16:18:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.238 16:18:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:54.238 16:18:39 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:54.238 16:18:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:54.238 16:18:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.238 16:18:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.238 16:18:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:54.238 16:18:39 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:54.238 16:18:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:54.238 16:18:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.238 16:18:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.238 16:18:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:54.238 16:18:39 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:54.238 16:18:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:54.238 16:18:39 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.238 16:18:39 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:54.238 16:18:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.238 16:18:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:54.238 16:18:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:54.238 16:18:39 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:54.238 16:18:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:54.238 No valid GPT data, bailing 00:03:54.238 16:18:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:54.238 16:18:39 -- scripts/common.sh@391 -- # pt= 00:03:54.238 16:18:39 -- scripts/common.sh@392 -- # return 1 00:03:54.238 16:18:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:54.238 1+0 records in 00:03:54.238 1+0 records out 00:03:54.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496514 s, 211 MB/s 00:03:54.238 16:18:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.239 16:18:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:54.239 16:18:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:54.239 16:18:39 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:54.239 16:18:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:54.239 No valid GPT data, bailing 00:03:54.239 16:18:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:54.239 16:18:39 -- scripts/common.sh@391 -- # pt= 00:03:54.239 16:18:39 -- scripts/common.sh@392 -- # return 1 00:03:54.239 16:18:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:54.239 1+0 records in 00:03:54.239 1+0 records out 00:03:54.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504941 s, 208 MB/s 00:03:54.239 16:18:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.239 16:18:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:54.239 16:18:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:54.239 16:18:39 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:54.239 16:18:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:54.239 No valid GPT data, bailing 00:03:54.239 16:18:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:54.497 16:18:39 -- scripts/common.sh@391 -- # pt= 00:03:54.497 16:18:39 -- scripts/common.sh@392 -- # return 1 00:03:54.497 16:18:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:54.497 1+0 records in 00:03:54.497 1+0 records out 00:03:54.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053486 s, 196 MB/s 00:03:54.497 16:18:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.497 16:18:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:54.497 16:18:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:54.497 16:18:39 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:54.497 16:18:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:54.497 No valid GPT data, bailing 00:03:54.497 16:18:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:54.497 16:18:39 -- scripts/common.sh@391 -- # pt= 00:03:54.497 16:18:39 -- scripts/common.sh@392 -- # return 1 00:03:54.497 16:18:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
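Note: the traced commands above show the pattern autotest uses before reusing the VM's NVMe namespaces: skip any namespace whose /sys/block/<dev>/queue/zoned reports something other than "none", treat a device as in use only if blkid finds a partition table on it, and otherwise scrub the first 1 MiB with dd. The sketch below is a simplified, illustrative reconstruction of that pattern only; the helper names, the glob, and the control flow are assumptions, not the actual SPDK functions (the real scripts also run scripts/spdk-gpt.py and use an extglob device pattern). It is destructive and only makes sense inside a disposable test VM.

    # Sketch of the zoned/partition checks visible in the trace above
    # (illustrative only; not the SPDK helpers themselves).
    is_block_zoned() {
        # A namespace is zoned when /sys/block/<dev>/queue/zoned is not "none".
        local dev=$1
        [[ -e /sys/block/$dev/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$dev/queue/zoned") != none ]]
    }

    block_in_use() {
        # Treat the block device as "in use" when blkid reports a partition table.
        local block=$1
        [[ -n $(blkid -s PTTYPE -o value "$block") ]]
    }

    for nvme in /sys/block/nvme*n*; do          # glob simplified vs. the trace
        dev=$(basename "$nvme")
        is_block_zoned "$dev" && continue        # leave zoned namespaces alone
        if ! block_in_use "/dev/$dev"; then
            # No valid partition data: zero the first 1 MiB, as in the trace.
            dd if=/dev/zero of="/dev/$dev" bs=1M count=1
        fi
    done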
00:03:54.497 1+0 records in 00:03:54.497 1+0 records out 00:03:54.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490045 s, 214 MB/s 00:03:54.497 16:18:39 -- spdk/autotest.sh@118 -- # sync 00:03:54.497 16:18:39 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:54.498 16:18:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:54.498 16:18:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.397 16:18:41 -- spdk/autotest.sh@124 -- # uname -s 00:03:56.397 16:18:41 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:56.397 16:18:41 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:56.397 16:18:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.397 16:18:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.397 16:18:41 -- common/autotest_common.sh@10 -- # set +x 00:03:56.397 ************************************ 00:03:56.397 START TEST setup.sh 00:03:56.397 ************************************ 00:03:56.397 16:18:41 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:56.397 * Looking for test storage... 00:03:56.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:56.397 16:18:41 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:56.397 16:18:41 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:56.397 16:18:41 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:56.397 16:18:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.397 16:18:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.397 16:18:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:56.397 ************************************ 00:03:56.397 START TEST acl 00:03:56.397 ************************************ 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:56.397 * Looking for test storage... 
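Note: the run_test calls above ("run_test setup.sh ...", "run_test acl ...") come from the common test harness in autotest_common.sh; it is that wrapper which prints the asterisk START TEST / END TEST banners and the real/user/sys timing summary bracketing each sub-test in this log. The following is only a rough illustration of that banner-and-timing pattern, not the actual implementation (the real helper also manages xtrace, argument checks, and nesting of test names), and the function name here is made up for the sketch.

    # Illustrative-only wrapper reproducing the START/END banner and timing
    # output seen in this log; not SPDK's run_test implementation.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # e.g. run_test_sketch acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh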
00:03:56.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:56.397 16:18:41 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:56.397 16:18:41 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.397 16:18:41 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:56.397 16:18:41 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:56.397 16:18:41 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:56.397 16:18:41 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:56.397 16:18:41 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:56.397 16:18:41 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.397 16:18:41 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.330 16:18:42 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:57.330 16:18:42 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:57.330 16:18:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.330 16:18:42 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:57.330 16:18:42 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.330 16:18:42 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:57.896 16:18:43 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.896 Hugepages 00:03:57.896 node hugesize free / total 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.896 00:03:57.896 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:57.896 16:18:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:58.154 16:18:43 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.154 16:18:43 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.154 16:18:43 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.154 16:18:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.154 ************************************ 00:03:58.154 START TEST denied 00:03:58.154 ************************************ 00:03:58.154 16:18:43 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:58.154 16:18:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:58.154 16:18:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:58.154 16:18:43 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:58.154 16:18:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.154 16:18:43 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.088 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.088 16:18:44 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.656 00:03:59.656 real 0m1.453s 00:03:59.656 user 0m0.578s 00:03:59.656 sys 0m0.798s 00:03:59.656 16:18:44 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.656 ************************************ 00:03:59.656 16:18:44 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:59.656 END TEST denied 00:03:59.656 ************************************ 00:03:59.656 16:18:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:59.656 16:18:45 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:59.656 16:18:45 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.656 16:18:45 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.656 16:18:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:59.656 ************************************ 00:03:59.656 START TEST allowed 00:03:59.656 ************************************ 00:03:59.656 16:18:45 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:59.656 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:59.656 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:59.656 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:59.656 16:18:45 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.656 16:18:45 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.590 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.590 16:18:45 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.158 00:04:01.158 real 0m1.501s 00:04:01.158 user 0m0.672s 00:04:01.158 sys 0m0.829s 00:04:01.158 16:18:46 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:01.158 16:18:46 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:01.158 ************************************ 00:04:01.158 END TEST allowed 00:04:01.158 ************************************ 00:04:01.158 16:18:46 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:01.158 00:04:01.158 real 0m4.735s 00:04:01.158 user 0m2.068s 00:04:01.158 sys 0m2.589s 00:04:01.158 16:18:46 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.158 16:18:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:01.158 ************************************ 00:04:01.158 END TEST acl 00:04:01.158 ************************************ 00:04:01.158 16:18:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:01.158 16:18:46 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:01.158 16:18:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.158 16:18:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.158 16:18:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.158 ************************************ 00:04:01.158 START TEST hugepages 00:04:01.158 ************************************ 00:04:01.158 16:18:46 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:01.158 * Looking for test storage... 00:04:01.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:01.158 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:01.158 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:01.158 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:01.158 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:01.158 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6023772 kB' 'MemAvailable: 7402352 kB' 'Buffers: 2436 kB' 'Cached: 1592864 kB' 'SwapCached: 0 kB' 'Active: 435844 kB' 'Inactive: 1263952 kB' 'Active(anon): 114984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263952 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 106224 kB' 'Mapped: 48780 kB' 'Shmem: 10488 kB' 'KReclaimable: 61412 kB' 'Slab: 137348 kB' 'SReclaimable: 61412 kB' 'SUnreclaim: 75936 kB' 'KernelStack: 6368 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 335208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.418 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:01.419 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.420 16:18:46 
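The trace above is setup/common.sh walking /proc/meminfo field by field (IFS=': ' plus read -r var val _) until it reaches Hugepagesize, which comes back as 2048 kB; hugepages.sh then records that as default_hugepages, notes the sysfs and procfs knobs, enumerates the NUMA nodes and zeroes any leftover per-node reservations (clear_hp). A minimal standalone sketch of that parse-and-clear pattern, assuming the usual sysfs layout and root privileges; the helper names below are illustrative, not the script's own:

    # Read the default hugepage size (in kB) the same way the trace does:
    # split each /proc/meminfo line on ': ' and stop at the Hugepagesize field.
    get_default_hugepagesize() {
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == Hugepagesize ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    # clear_hp-style cleanup: write 0 into every per-node nr_hugepages so a test
    # starts from an empty pool (needs root).
    clear_hugepages() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
    }

    default_kb=$(get_default_hugepagesize)   # 2048 on this VM
    clear_hugepages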
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:01.420 16:18:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:01.420 16:18:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.420 16:18:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.420 16:18:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.420 ************************************ 00:04:01.420 START TEST default_setup 00:04:01.420 ************************************ 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.420 16:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.986 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.249 0000:00:11.0 (1b36 
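run_test default_setup above asks get_test_nr_hugepages for 2097152 kB on node 0; with a 2048 kB default page that works out to nr_hugepages=1024, all of it assigned to the single node, before scripts/setup.sh binds the NVMe devices. A rough sketch of the same size-to-pages arithmetic, assuming one default-size pool per node; request_hugepages is an illustrative name, not part of the harness:

    # Convert a size request in kB into default-size hugepages and spread the count
    # over the requested NUMA nodes (everything lands on node 0 in the log above).
    request_hugepages() {
        local size_kb=$1; shift
        local nodes=("$@")
        local default_kb total per_node node
        default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
        total=$(( size_kb / default_kb ))                               # 2097152 / 2048 = 1024
        per_node=$(( total / ${#nodes[@]} ))
        for node in "${nodes[@]}"; do
            echo "$per_node" \
                > "/sys/devices/system/node/node$node/hugepages/hugepages-${default_kb}kB/nr_hugepages"
        done
    }

    # request_hugepages 2097152 0   # needs root; mirrors the 1024-page allocation above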
0010): nvme -> uio_pci_generic 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8090712 kB' 'MemAvailable: 9469124 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452444 kB' 'Inactive: 1263980 kB' 'Active(anon): 131584 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122776 kB' 'Mapped: 48772 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136912 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75892 kB' 'KernelStack: 6304 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
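verify_nr_hugepages begins here by testing the transparent-hugepage mode (the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check matches the bracketed selection read from the THP "enabled" file) before it samples AnonHugePages from the /proc/meminfo snapshot printed above. The trace only shows the pattern test itself, so the surrounding conditional in this sketch is a guess at intent, assuming the stock sysfs path:

    # Read the transparent hugepage mode; the active setting is the bracketed word,
    # e.g. "always [madvise] never" on this VM.
    thp_file=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ -r $thp_file ]]; then
        thp_mode=$(<"$thp_file")
        if [[ $thp_mode != *'[never]'* ]]; then
            # THP is not disabled, so AnonHugePages in /proc/meminfo can be non-zero
            # and is worth sampling (it reads 0 kB in the snapshot above).
            anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        fi
    fi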
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.249 16:18:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.250 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
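Each get_meminfo call in this stretch (AnonHugePages just above, then HugePages_Surp) replays the same scan: use /proc/meminfo, or the per-node file under /sys/devices/system/node/nodeN/meminfo when a node is given (those lines carry a "Node N " prefix that gets stripped), then split on ': ' until the requested key turns up. A compact, hedged equivalent; meminfo_value is an illustrative name, not the script's own helper:

    # Print the value of one meminfo field, system-wide or for a single NUMA node.
    meminfo_value() {
        local key=$1 node=${2-}
        local mem_f=/proc/meminfo line var val _
        # Per-node statistics live in sysfs and prefix every line with "Node <n> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node "$node" }
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        return 1
    }

    anon=$(meminfo_value AnonHugePages)     # 0 kB in the snapshot above
    surp=$(meminfo_value HugePages_Surp)    # also 0 here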
# mem=("${mem[@]#Node +([0-9]) }") 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8090712 kB' 'MemAvailable: 9469124 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452336 kB' 'Inactive: 1263980 kB' 'Active(anon): 131476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122660 kB' 'Mapped: 48772 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136908 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75888 kB' 'KernelStack: 6256 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.251 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8090712 kB' 'MemAvailable: 9469124 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452188 kB' 'Inactive: 1263980 kB' 'Active(anon): 131328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122476 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136908 kB' 'SReclaimable: 61020 kB' 
'SUnreclaim: 75888 kB' 'KernelStack: 6272 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.252 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.253 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.254 16:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:02.254 nr_hugepages=1024 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.254 resv_hugepages=0 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.254 surplus_hugepages=0 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.254 anon_hugepages=0 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8090712 kB' 'MemAvailable: 9469124 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452336 kB' 'Inactive: 1263980 kB' 'Active(anon): 131476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122564 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136908 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75888 kB' 'KernelStack: 6240 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8090712 kB' 'MemUsed: 4151248 kB' 'SwapCached: 0 kB' 'Active: 452144 kB' 'Inactive: 1263980 kB' 'Active(anon): 131284 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1595308 kB' 'Mapped: 48700 kB' 'AnonPages: 122436 kB' 'Shmem: 10464 kB' 'KernelStack: 6292 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61020 kB' 'Slab: 136900 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.256 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.257 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.257 
[xtrace elided: get_meminfo reads the remaining /proc/meminfo keys (Mlocked through HugePages_Free) and skips each one that is not HugePages_Surp with continue]
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
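The block above is the xtrace of setup/common.sh's get_meminfo helper: it walks a snapshot of /proc/meminfo with IFS=': ', skips every key that is not the one requested, and echoes the value of the match (here HugePages_Surp, which is 0). Below is a minimal sketch of that parsing pattern, reconstructed from the trace alone; get_meminfo_sketch is a hypothetical name, and the real helper (which snapshots the file with mapfile and can also read a per-node meminfo under /sys/devices/system/node) differs in detail.

get_meminfo_sketch() {
    local get=$1 var val _
    # Walk /proc/meminfo the way the trace does: split each line on ': ',
    # skip every key that is not the requested one, print the first match.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    echo 0   # requested key not present
}

get_meminfo_sketch HugePages_Surp   # prints 0 on this runner, matching the trace above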
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:02.258 node0=1024 expecting 1024
16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:02.258
00:04:02.258 real 0m0.983s
00:04:02.258 user 0m0.466s
00:04:02.258 sys 0m0.458s
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:02.258 16:18:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:02.258 ************************************
00:04:02.258 END TEST default_setup
00:04:02.258 ************************************
00:04:02.258 16:18:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:02.258 16:18:47 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:02.258 16:18:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:02.258 16:18:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:02.258 16:18:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:02.517 ************************************
00:04:02.517 START TEST per_node_1G_alloc
00:04:02.517 ************************************
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.517 16:18:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:02.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:02.781 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.781 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
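The get_test_nr_hugepages 1048576 0 trace above reduces the 1 GiB request to 512 default-sized hugepages and assigns them to node 0, which is what scripts/setup.sh is then invoked with (NRHUGE=512 HUGENODE=0). A short sketch of that arithmetic follows, assuming the 2048 kB Hugepagesize reported in the meminfo dumps below; variable names mirror the trace, not the script's exact internals.

# Hedged sketch of the size -> per-node hugepage count computed above (hugepages.sh@49..@73).
size=1048576                # requested size in kB (1 GiB), first argument
default_hugepages=2048      # kB, assumed from 'Hugepagesize: 2048 kB' in the snapshots below
user_nodes=(0)              # remaining arguments: target NUMA nodes

(( size >= default_hugepages ))                 # guard seen at hugepages.sh@55
nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512

declare -A nodes_test
for _no_nodes in "${user_nodes[@]}"; do
    nodes_test[$_no_nodes]=$nr_hugepages        # nodes_test[0]=512, as in the trace
done

echo "NRHUGE=$nr_hugepages HUGENODE=${user_nodes[*]}"   # the values handed to scripts/setup.sh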
-- setup/common.sh@31 -- # read -r var val _ 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 9142840 kB' 'MemAvailable: 10521256 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452776 kB' 'Inactive: 1263984 kB' 'Active(anon): 131916 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123056 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136876 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75856 kB' 'KernelStack: 6276 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.781 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.781 16:18:48 
[xtrace elided: the AnonHugePages lookup reads and skips every other /proc/meminfo key (SwapCached through HardwareCorrupted) with continue]
00:04:02.782 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.782 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.783 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 9142840 kB' 'MemAvailable: 10521256 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452384 kB' 'Inactive: 1263984 kB' 'Active(anon): 131524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122620 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136860 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75840 kB' 'KernelStack: 6212 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
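verify_nr_hugepages resolves one field per pass over these snapshots: AnonHugePages (already 0 above), then HugePages_Surp, then HugePages_Rsvd, before comparing the per-node counts it set up earlier. A hedged sketch of how those values combine for the check, using the numbers printed in the snapshot; the awk lookups stand in for get_meminfo, and the exact expressions in setup/hugepages.sh may differ.

# Hedged sketch only; not the literal verify_nr_hugepages implementation.
expected=512                                                   # NRHUGE requested for this test
anon=$(awk '/^AnonHugePages:/  {print $2}' /proc/meminfo)      # 0 kB above; also fetched by the trace
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # 0
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)      # 0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 512 after scripts/setup.sh ran
free=$(awk '/^HugePages_Free:/  {print $2}' /proc/meminfo)     # 512

# With no surplus or reserved pages and nothing mapped yet, every configured page is still free,
# which is what the 'node0=... expecting ...' line asserts for the single-node case:
echo "node0=$total expecting $expected"
[[ $total == "$expected" ]] && (( free == total - surp - resv )) && echo "hugepage count verified"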
[xtrace elided: the HugePages_Surp lookup reads and skips every key from MemTotal through HugePages_Total with continue]
00:04:02.784 16:18:48
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.784 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.784 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.784 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.784 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.784 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.784 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.784 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.784 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.784 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 9142848 kB' 'MemAvailable: 10521264 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452416 kB' 'Inactive: 1263984 kB' 'Active(anon): 131556 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122688 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136872 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75852 kB' 'KernelStack: 6256 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
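The xtrace above is setup/common.sh's get_meminfo walking a meminfo snapshot one "key: value" pair at a time and discarding every line until it reaches the requested field (HugePages_Rsvd in this pass). A minimal standalone sketch of that pattern, assuming a hypothetical helper name meminfo_value and plain /proc/meminfo as the source (the real helper also handles per-node snapshots and extra bookkeeping):

    # Sketch only: return the value column of a single /proc/meminfo field.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip keys we were not asked for
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    # Example: meminfo_value HugePages_Rsvd   -> prints 0 on this runner

The long run of per-key trace lines in the log is exactly this "[[ $var == ... ]] || continue" step, printed once for every field in the snapshot.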
00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.785 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
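The backslash-riddled \H\u\g\e\P\a\g\e\s\_\R\s\v\d tokens are not corruption: bash's xtrace escapes every character of the right-hand side of a [[ ... == pattern ]] test so the logged command stays a literal match when re-read, and the prefix in this log (timestamp, test name, file@line, " -- # ") evidently comes from a customized PS4 rather than the default "+ ". A tiny illustration with throwaway values:

    # Run in a scratch shell; xtrace output goes to stderr.
    set -x
    var=MemTotal
    [[ $var == HugePages_Rsvd ]] || :
    # with the default PS4 the test line appears as:
    # + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]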
00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 
16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.786 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.787 nr_hugepages=512 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:02.787 resv_hugepages=0 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.787 surplus_hugepages=0 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.787 anon_hugepages=0 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 9142848 kB' 'MemAvailable: 10521264 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452100 kB' 'Inactive: 1263984 kB' 'Active(anon): 131240 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122372 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136872 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75852 kB' 'KernelStack: 6256 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
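The second lookup (HugePages_Total here, HugePages_Surp for node 0 further down) differs only in where the snapshot comes from: the system-wide /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node argument is given, whose lines carry a leading "Node N " prefix that is stripped before the same key scan runs. A hedged sketch of that source selection, with the helper name node_meminfo assumed for illustration:

    # Sketch only: dump a meminfo snapshot, system-wide or scoped to one NUMA node.
    node_meminfo() {
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <N> "; /proc/meminfo does not,
        # so this strip is a no-op in the system-wide case.
        mem=("${mem[@]#Node $node }")
        printf '%s\n' "${mem[@]}"
    }

    # Example: node_meminfo 0 | grep '^HugePages_Total:'   -> HugePages_Total: 512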
00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.787 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
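All of the bookkeeping hugepages.sh does around this point reduces to the arithmetic invariant the trace shows literally, (( 512 == nr_hugepages + surp + resv )): the 512 pages the test configured must equal what the kernel reports once surplus and reserved pages (both 0 in this run) are accounted for. A compact sketch of that check, reusing the hypothetical meminfo_value helper from the earlier sketch and taking nr_hugepages from HugePages_Total:

    # Sketch only: assert the hugepage accounting the kernel reports is consistent.
    expected=512
    nr_hugepages=$(meminfo_value HugePages_Total)   # 512 in this log
    resv=$(meminfo_value HugePages_Rsvd)            # 0
    surp=$(meminfo_value HugePages_Surp)            # 0

    if (( expected == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    else
        echo "hugepage accounting mismatch" >&2
        exit 1
    fi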
00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 
16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.788 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 9142848 kB' 'MemUsed: 3099112 kB' 'SwapCached: 0 kB' 'Active: 452324 kB' 'Inactive: 1263984 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1595308 kB' 'Mapped: 48644 kB' 'AnonPages: 122596 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61020 kB' 'Slab: 136872 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75852 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.789 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.790 node0=512 expecting 512 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:02.790 00:04:02.790 real 0m0.499s 00:04:02.790 user 0m0.251s 00:04:02.790 sys 0m0.279s 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.790 16:18:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.790 ************************************ 00:04:02.790 END TEST per_node_1G_alloc 00:04:02.790 ************************************ 00:04:03.048 16:18:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:03.048 16:18:48 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:03.048 16:18:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.048 16:18:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.048 16:18:48 
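The long field-by-field scan that closes per_node_1G_alloc above is setup/common.sh's get_meminfo helper: it slurps the (optionally per-node) meminfo file, strips any leading "Node N " column, and walks the fields with IFS=': ' read -r var val _ until the requested key matches, which is why every non-matching field produces the repeated continue/IFS/read triple in the trace. A minimal reconstruction of that parser from the traced lines (a sketch, not the verbatim SPDK source):

    shopt -s extglob                        # needed for the +([0-9]) prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem
        # per-node queries read the node-specific meminfo instead (common.sh@23-25)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " column of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # the common.sh@32 'continue' seen above
            echo "$val"                         # e.g. HugePages_Surp -> 0 (common.sh@33)
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }

The hugepages.sh@126-130 lines that follow simply tally the per-node result (512 pages on node 0 here) against the expected split, hence the 'node0=512 expecting 512' line just before the END TEST banner.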
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.048 ************************************ 00:04:03.048 START TEST even_2G_alloc 00:04:03.048 ************************************ 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.048 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.310 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.310 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc 
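Above, even_2G_alloc first turns the requested 2097152 kB (2 GiB) into a page count: with the default 2048 kB hugepage size that is nr_hugepages=1024, and with no user-supplied node list get_test_nr_hugepages_per_node parks all of it on node 0 (hugepages.sh@49-84). Roughly, as a simplified reconstruction (the division by the default page size is inferred from the traced values; the real helpers carry more options):

    declare -a nodes_test
    default_hugepages=2048                        # kB, the Hugepagesize seen in meminfo

    get_test_nr_hugepages() {                     # sketch of setup/hugepages.sh@49-58
        local size=$1                             # 2097152 kB in this trace
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
        get_test_nr_hugepages_per_node
    }

    get_test_nr_hugepages_per_node() {            # sketch of @62-84: single node, no user list
        local -a user_nodes=()                    # @62: caller passed no explicit nodes
        local _nr_hugepages=$nr_hugepages _no_nodes=1
        nodes_test[_no_nodes - 1]=$_nr_hugepages  # @82: nodes_test[0]=1024
    }

    get_test_nr_hugepages 2097152 && echo "node0 gets ${nodes_test[0]} pages"

With that target in place, @153 exports NRHUGE=1024 and HUGE_EVEN_ALLOC=yes and hands control to scripts/setup.sh, which reserves the pages before verify_nr_hugepages re-reads meminfo below.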
-- setup/hugepages.sh@92 -- # local surp 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.310 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8091792 kB' 'MemAvailable: 9470208 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452512 kB' 'Inactive: 1263984 kB' 'Active(anon): 131652 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122764 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136856 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75836 kB' 'KernelStack: 6260 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.311 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8091540 kB' 'MemAvailable: 9469956 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452464 kB' 'Inactive: 
1263984 kB' 'Active(anon): 131604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123052 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136856 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75836 kB' 'KernelStack: 6304 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 
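The snapshots printed by common.sh@16 above already show the even allocation landed: HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB on a 12241960 kB guest. For a quick manual check outside the test harness, the same fields can be pulled with the get_meminfo sketch shown earlier (illustrative only):

    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    free=$(get_meminfo HugePages_Free)     # 1024: nothing handed out to applications yet
    echo "reserved: $(( total * 2048 )) kB, free pages: $free"   # 2097152 kB, matching Hugetlb above

(A plain grep -E '^(HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo prints the same block directly.)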
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.312 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 
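A side note on the \H\u\g\e\P\a\g\e\s\_\S\u\r\p spelling that fills this trace: the script presumably compares with [[ $var == "$get" ]], and bash's xtrace escapes each character of the quoted right-hand side to show that it is matched literally rather than as a glob; the comparisons are ordinary string equality. A short demo of the same rendering:

    set -x
    get=HugePages_Surp
    [[ HugePages_Free == "$get" ]]   # trace shows: [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]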
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.313 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8091792 kB' 'MemAvailable: 9470204 kB' 'Buffers: 2436 kB' 'Cached: 1592868 kB' 'SwapCached: 0 kB' 'Active: 452340 kB' 'Inactive: 1263980 kB' 'Active(anon): 131480 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122604 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136856 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75836 kB' 'KernelStack: 6224 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.314 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:03.315 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.316 nr_hugepages=1024 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.316 resv_hugepages=0 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.316 surplus_hugepages=0 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.316 anon_hugepages=0 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8091792 kB' 'MemAvailable: 9470208 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452056 kB' 'Inactive: 1263984 kB' 'Active(anon): 131196 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122572 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136820 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75800 kB' 'KernelStack: 6256 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.316 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.317 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8091792 kB' 'MemUsed: 4150168 kB' 'SwapCached: 0 kB' 'Active: 451984 kB' 'Inactive: 1263984 kB' 'Active(anon): 131124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1595308 kB' 'Mapped: 48644 kB' 'AnonPages: 122220 kB' 'Shmem: 10464 kB' 'KernelStack: 6276 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61020 kB' 'Slab: 136816 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.317 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.318 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.319 node0=1024 expecting 1024 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:03.319
00:04:03.319 real 0m0.480s
00:04:03.319 user 0m0.257s
00:04:03.319 sys 0m0.254s
00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:03.319 16:18:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:03.319 ************************************
00:04:03.319 END TEST even_2G_alloc
00:04:03.319 ************************************
00:04:03.319 16:18:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:03.319 16:18:48 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:03.319 16:18:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:03.319 16:18:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:03.319 16:18:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:03.577 ************************************
00:04:03.577 START TEST odd_alloc
00:04:03.577 ************************************
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
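For readers following the trace: the long runs of IFS=': ' / read -r var val _ / continue lines above come from the get_meminfo helper in setup/common.sh scanning /proc/meminfo one key at a time until it reaches the requested field, and the odd_alloc test that starts here sizes its pool so the page count comes out odd. Below is a minimal sketch of both ideas, not the actual SPDK helper: the function name get_meminfo_value and the round-up division are assumptions made for illustration; only the figures 2098176 kB, 2048 kB and 1025 pages come from the trace itself.

#!/usr/bin/env bash
# Sketch of the lookup the trace performs over and over: read key/value
# pairs from a meminfo file, skip ("continue") every key that is not the
# requested one, and print the value of the first match.
get_meminfo_value() {                     # hypothetical name, not the SPDK helper
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the per-NUMA-node meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <N> "; strip that first.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # the source of the many "continue" lines
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Page count the odd_alloc trace arrives at: HUGEMEM=2049 MB is 2098176 kB,
# and with the 2048 kB Hugepagesize reported above that rounds up to an odd
# 1025 pages (the exact rounding used by setup/hugepages.sh is assumed here).
hugepagesize_kb=$(get_meminfo_value Hugepagesize)  # 2048 on this VM
requested_kb=$((2049 * 1024))                      # 2098176
nr_hugepages=$(( (requested_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"                  # prints nr_hugepages=1025 here

On a host with 2048 kB huge pages this prints nr_hugepages=1025, matching the value the trace assigns before handing off to scripts/setup.sh.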
00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.577 16:18:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.839 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.839 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8101584 kB' 'MemAvailable: 9480000 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452668 kB' 'Inactive: 1263984 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123208 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136816 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75796 kB' 'KernelStack: 6320 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.839 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 
16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 
16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.840 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241960 kB' 'MemFree: 8101332 kB' 'MemAvailable: 9479748 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452328 kB' 'Inactive: 1263984 kB' 'Active(anon): 131468 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122676 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136816 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75796 kB' 'KernelStack: 6288 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
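The long run of IFS=': ' / read -r var val _ / continue entries surrounding this point is setup/common.sh's get_meminfo walking the captured /proc/meminfo snapshot key by key until it reaches the requested one (HugePages_Surp in this pass). A condensed sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob   # for the 'Node +([0-9]) ' prefix strip below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # With a node number, read the per-node view from sysfs instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated 'continue' above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on the host traced here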
00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.841 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 
16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8101332 kB' 'MemAvailable: 9479748 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452088 kB' 'Inactive: 1263984 kB' 'Active(anon): 131228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122416 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136816 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75796 kB' 'KernelStack: 6288 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.842 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
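This third pass collects HugePages_Rsvd, after AnonHugePages and HugePages_Surp earlier; all three come back 0 on this host. verify_nr_hugepages then echoes the summary and checks that the kernel's hugepage totals add up to the 1025 pages the test requested. A compact, self-contained sketch of that bookkeeping, using a plain awk lookup as a stand-in for the script's own reader rather than the exact logic of setup/hugepages.sh:

    #!/usr/bin/env bash
    # Sketch only: mirror the accounting visible at setup/hugepages.sh@99-@110.
    verify_nr_hugepages_sketch() {
        local nr_hugepages=1025   # requested by odd_alloc above
        # Hypothetical helper: first value for a key in /proc/meminfo.
        meminfo() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }
        local anon surp resv total
        anon=$(meminfo AnonHugePages)    # transparent hugepages, 0 in this run
        surp=$(meminfo HugePages_Surp)   # surplus pages, 0 in this run
        resv=$(meminfo HugePages_Rsvd)   # reserved pages, 0 in this run
        total=$(meminfo HugePages_Total)
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        # The pool the kernel reports must cover the requested pages plus
        # any surplus and reserved pages on top of them.
        (( total == nr_hugepages + surp + resv ))
    }

    verify_nr_hugepages_sketch && echo 'hugepage accounting matches'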
00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.843 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.844 nr_hugepages=1025 00:04:03.844 resv_hugepages=0 00:04:03.844 surplus_hugepages=0 00:04:03.844 anon_hugepages=0 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.844 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8101332 kB' 'MemAvailable: 9479748 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452080 kB' 'Inactive: 1263984 kB' 'Active(anon): 131220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122672 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136816 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75796 kB' 'KernelStack: 6288 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.845 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.846 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8101332 kB' 'MemUsed: 4140628 kB' 'SwapCached: 0 kB' 'Active: 452048 kB' 'Inactive: 1263984 kB' 'Active(anon): 131188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1595308 kB' 'Mapped: 48644 kB' 'AnonPages: 122608 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61020 kB' 'Slab: 136812 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.847 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
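The xtrace above shows the shape of the get_meminfo lookups driving this test: setup/common.sh reads /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node id is given), strips the per-node "Node <id>" prefix, splits each line on ': ', and echoes the value once the requested field name matches; every non-matching field is skipped with "continue". A minimal standalone sketch of that pattern, using a hypothetical helper name rather than the project's actual common.sh code, and assuming the standard Linux meminfo layouts:

#!/usr/bin/env bash
# Hypothetical helper sketching the lookup pattern visible in the trace above.
get_meminfo_field() {
    local get=$1 node=${2-} mem_f=/proc/meminfo line var val _

    # Per-node lookups read the sysfs copy when a node id is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        line=${line#"Node $node "}      # sysfs lines carry a "Node <id> " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # numeric value only; the kB column is dropped
            return 0
        fi
    done < "$mem_f"
    return 1
}

# e.g. get_meminfo_field HugePages_Total     -> 1025 per the dump above
#      get_meminfo_field HugePages_Surp 0    -> 0

In the surrounding hugepages.sh trace these lookups feed the accounting check (( HugePages_Total == nr_hugepages + surp + resv )) and the per-node tally that ends in "node0=1025 expecting 1025" below.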
00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.848 node0=1025 expecting 1025 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:03.848 00:04:03.848 real 0m0.518s 00:04:03.848 user 0m0.240s 00:04:03.848 sys 0m0.293s 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.848 16:18:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.848 ************************************ 00:04:03.848 END TEST odd_alloc 00:04:03.848 ************************************ 00:04:04.107 16:18:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:04.107 16:18:49 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:04.107 16:18:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.107 16:18:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.107 16:18:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.107 ************************************ 00:04:04.107 START TEST custom_alloc 00:04:04.107 ************************************ 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.107 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.402 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.402 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
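For the custom_alloc run starting here, the trace requests a 1048576 kB pool and the per-node bookkeeping ends up as HUGENODE='nodes_hp[0]=512'. A back-of-the-envelope check of that sizing, assuming the 2048 kB Hugepagesize reported in the meminfo dumps and the single NUMA node visible on this VM:

# Sketch of the pool-size arithmetic implied by the custom_alloc setup above.
size_kb=1048576            # requested pool: 1 GiB expressed in kB
hugepage_kb=2048           # default hugepage size on this VM
nr_hugepages=$(( size_kb / hugepage_kb ))   # 1048576 / 2048 = 512
echo "nodes_hp[0]=$nr_hugepages"            # all 512 pages land on node 0

verify_nr_hugepages below then repeats the same meminfo lookups to confirm the kernel actually provided those 512 pages.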
# verify_nr_hugepages 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 9147952 kB' 'MemAvailable: 10526372 kB' 'Buffers: 2436 kB' 'Cached: 1592876 kB' 'SwapCached: 0 kB' 'Active: 452788 kB' 'Inactive: 1263988 kB' 'Active(anon): 131928 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123152 kB' 'Mapped: 49044 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136872 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75852 kB' 'KernelStack: 6324 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.402 16:18:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.402 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241960 kB' 'MemFree: 9147952 kB' 'MemAvailable: 10526368 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452144 kB' 'Inactive: 1263984 kB' 'Active(anon): 131284 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122644 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136876 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75856 kB' 'KernelStack: 6256 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.403 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 
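Both meminfo snapshots printed so far agree on the hugepage pool itself: HugePages_Total: 512, HugePages_Free: 512, Hugepagesize: 2048 kB and Hugetlb: 1048576 kB, i.e. 512 * 2048 kB = 1048576 kB (1 GiB) with no surplus pages, so the Hugetlb figure is fully explained by the 2 MiB pool. A quick cross-check using the get_meminfo sketch above (assumes a single configured hugepage size, as in this run):

  total=$(get_meminfo HugePages_Total)   # 512 in the snapshots above
  size_kb=$(get_meminfo Hugepagesize)    # 2048 kB
  hugetlb_kb=$(get_meminfo Hugetlb)      # 1048576 kB
  echo "pool: $(( total * size_kb )) kB, reported Hugetlb: $hugetlb_kb kB"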
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.404 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 9147952 kB' 'MemAvailable: 10526368 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452052 kB' 'Inactive: 1263984 kB' 'Active(anon): 131192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122548 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136876 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75856 kB' 'KernelStack: 6256 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.405 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.406 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:04.407 nr_hugepages=512 00:04:04.407 resv_hugepages=0 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.407 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 9147952 kB' 'MemAvailable: 10526368 kB' 'Buffers: 2436 kB' 'Cached: 1592872 kB' 'SwapCached: 0 kB' 'Active: 452056 kB' 'Inactive: 1263984 kB' 'Active(anon): 131196 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122588 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136860 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75840 kB' 'KernelStack: 6272 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
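Two quick consistency checks on the snapshot just printed, using only numbers already in the log: 512 huge pages of 2048 kB each account exactly for the 1048576 kB reported as Hugetlb, and with zero surplus and zero reserved pages the hugepages.sh@107 comparison holds:

# Values copied from the dump above; nothing here is a new measurement.
nr_hugepages=512 surp=0 resv=0 hugepagesize_kb=2048
echo $(( nr_hugepages * hugepagesize_kb ))           # 1048576 -> matches 'Hugetlb: 1048576 kB'
(( 512 == nr_hugepages + surp + resv )) && echo ok   # the check traced at hugepages.sh@107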
[... xtrace trimmed: the read loop compares every /proc/meminfo field from MemTotal onward against HugePages_Total and continues past each non-matching one ...]
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
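The get_nodes pass above discovers NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) and deriving the node id with ${node##*node}; on this VM that yields a single node (no_nodes=1) expected to hold all 512 pages. A standalone sketch of that enumeration idiom; the 2048 kB sysfs counter read in the loop body is a generic kernel interface used here for illustration, not something this particular trace reads:

#!/usr/bin/env bash
# Sketch of the node-globbing idiom at hugepages.sh@29-@32; not the SPDK script itself.
shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}          # "/sys/devices/system/node/node0" -> "0"
    nodes_sys[$id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "nodes found: ${#nodes_sys[@]}"   # 1 on the VM traced here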
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.409 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 9147952 kB' 'MemUsed: 3094008 kB' 'SwapCached: 0 kB' 'Active: 452056 kB' 'Inactive: 1263984 kB' 'Active(anon): 131196 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1595308 kB' 'Mapped: 48644 kB' 'AnonPages: 122588 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61020 kB' 'Slab: 136856 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
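Because a node id (0) was passed this time, common.sh switches its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from each line before running the same read loop. A small sketch of that selection, plus a cross-check of the MemUsed figure in the node0 dump (all numbers taken from the log):

#!/usr/bin/env bash
# Illustrative sketch of the node-aware source pick at common.sh@22-@29; not SPDK code.
shopt -s extglob
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")    # per-node lines begin with "Node 0 "; drop that prefix

# MemUsed on a node is simply MemTotal - MemFree:
echo $(( 12241960 - 9147952 ))      # 3094008 -> matches 'MemUsed: 3094008 kB' above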
[... xtrace trimmed: the read loop compares each node0 meminfo field from MemTotal onward against HugePages_Surp, continuing past every non-matching entry ...]
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
node0=512 expecting 512
************************************
00:04:04.699 END TEST custom_alloc
00:04:04.699 ************************************
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:04.699
00:04:04.699 real 0m0.529s
00:04:04.699 user 0m0.262s
00:04:04.699 sys 0m0.282s
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
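The closing comparison at hugepages.sh@130, [[ 512 == \5\1\2 ]], backslash-escapes every character on the right-hand side so [[ == ]] does a literal string match rather than glob matching; it passes because node0 ended up with exactly the 512 pages requested. A toy reproduction of that final assertion (variable names are illustrative only):

# Stand-in for the hugepages.sh@130 assertion; not the script itself.
actual=512
expected=512
if [[ $actual == \5\1\2 ]]; then     # escaped digits force a literal match, no globbing
    echo "node0=$actual expecting $expected"
fi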
00:04:04.699 16:18:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:04.699 16:18:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:04.699 16:18:50 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:04.699 16:18:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:04.699 16:18:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:04.699 16:18:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:04.699 ************************************
00:04:04.699 START TEST no_shrink_alloc
00:04:04.699 ************************************
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
[... xtrace trimmed: argument handling at hugepages.sh@50-@52 shifts the size argument off and records node_ids=('0') ...]
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
[... xtrace trimmed: hugepages.sh@62-@70 sets user_nodes=('0'), _nr_hugepages=1024, _no_nodes=1, initialises an empty nodes_test array and loops over user_nodes ...]
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.699 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:04.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:04.963 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.963 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
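The numbers above are internally consistent: the requested 2097152 kB divided by the 2048 kB default huge page size (reported as Hugepagesize in the meminfo dumps) is exactly the 1024 pages assigned to node 0. A one-line check with values taken from the log:

# 2 GiB worth of 2 MiB pages on a single node; values from the trace above.
size_kb=2097152 hugepagesize_kb=2048
echo $(( size_kb / hugepagesize_kb ))   # 1024 -> matches nr_hugepages=1024 and nodes_test[0]=1024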
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
[... xtrace trimmed: hugepages.sh@90-@94 declares the sorted_t, sorted_s, surp, resv and anon locals ...]
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.963 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8098340 kB' 'MemAvailable: 9476760 kB' 'Buffers: 2436 kB' 'Cached: 1592876 kB' 'SwapCached: 0 kB' 'Active: 453120 kB' 'Inactive: 1263988 kB' 'Active(anon): 132260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123384 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136888 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75868 kB' 'KernelStack: 6324 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
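Before counting anonymous huge pages, hugepages.sh@96 tests the string "always [madvise] never" against the glob *\[\n\e\v\e\r\]*; that bracketed-selection format is the one used by /sys/kernel/mm/transparent_hugepage/enabled, which is presumably where the string comes from (the path itself never appears in this log). AnonHugePages is only worth inspecting when THP is not pinned to "never". A hedged sketch of that gate:

#!/usr/bin/env bash
# Sketch only; the sysfs path is an assumption, the log never prints it explicitly.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *\[never\]* ]]; then
    echo "THP not disabled ($thp); checking AnonHugePages"
fi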
[... xtrace trimmed: the read loop compares each /proc/meminfo field from MemTotal onward against AnonHugePages, continuing past every non-matching entry; the captured log breaks off partway through this verify_nr_hugepages pass ...]
setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8098340 kB' 'MemAvailable: 9476760 kB' 'Buffers: 2436 kB' 'Cached: 1592876 kB' 'SwapCached: 0 kB' 'Active: 452412 kB' 'Inactive: 1263988 kB' 'Active(anon): 131552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122712 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136908 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75888 kB' 'KernelStack: 6288 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.964 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8098340 kB' 'MemAvailable: 9476760 kB' 'Buffers: 2436 kB' 'Cached: 1592876 kB' 'SwapCached: 0 kB' 'Active: 452160 kB' 'Inactive: 1263988 kB' 'Active(anon): 131300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122452 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136908 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75888 kB' 'KernelStack: 6288 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.965 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.966 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.967 nr_hugepages=1024 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.967 resv_hugepages=0 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.967 surplus_hugepages=0 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.967 anon_hugepages=0 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
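For reference, the setup/common.sh get_meminfo helper that produces all of the probing above boils down to roughly the following. This is a sketch reconstructed from the xtrace output, not the verbatim source; in this run the optional node argument is empty, so the generic /proc/meminfo is read rather than a per-node meminfo file.

    # Sketch of get_meminfo as inferred from the trace (assumes extglob for the +([0-9]) pattern).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val
        local mem_f=/proc/meminfo mem
        # Prefer the per-node file when a node id is supplied and the file exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix of per-node entries
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # each mismatch shows up above as a "continue"
            echo "$val"                       # the matching field's numeric value
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as, e.g., get_meminfo HugePages_Surp, it prints just the number after the matching field, which is why every probe above ends with an echo of the value followed by return 0.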
00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8098340 kB' 'MemAvailable: 9476760 kB' 'Buffers: 2436 kB' 'Cached: 1592876 kB' 'SwapCached: 0 kB' 'Active: 452160 kB' 'Inactive: 1263988 kB' 'Active(anon): 131300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122452 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136908 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75888 kB' 'KernelStack: 6288 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 352336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.967 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
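Stripped of the tracing, the no_shrink_alloc accounting that these probes feed amounts to roughly the checks below. Variable names such as NRHUGE are illustrative stand-ins, not the literal setup/hugepages.sh source; the values in the comments are the ones seen in this run.

    # Illustrative recap of the hugepage accounting traced here (not verbatim hugepages.sh).
    NRHUGE=1024                                   # pages requested by the test
    anon=$(get_meminfo AnonHugePages)             # 0 in this run
    surp=$(get_meminfo HugePages_Surp)            # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)            # 0 in this run
    nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Both assertions appear in the trace (with the request already expanded to 1024):
    # the request must account for any surplus/reserved pages, and the pool must not
    # have shrunk below it.
    ((NRHUGE == nr_hugepages + surp + resv))
    ((NRHUGE == nr_hugepages))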
00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
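The long backslash runs such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are not corruption: bash's xtrace re-quotes the right-hand operand of the [[ == ]] comparison character by character so that a re-executed copy would still match literally rather than as a glob, and the "setup/common.sh@NN --" prefix on every entry is evidently a customized PS4 rather than the default "+". A minimal reproduction, assuming plain bash with xtrace enabled:

    set -x
    get=HugePages_Total
    var=MemFree
    [[ $var == "$get" ]] || echo 'no match, keep scanning'
    # xtrace prints something like:
    #   + [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    #   + echo 'no match, keep scanning'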
00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.968 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8098340 kB' 'MemUsed: 4143620 kB' 'SwapCached: 0 kB' 'Active: 452164 kB' 'Inactive: 1263988 kB' 'Active(anon): 131304 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1595312 kB' 'Mapped: 48644 kB' 'AnonPages: 122676 kB' 
'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61020 kB' 'Slab: 136908 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.969 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 
16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.229 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.230 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.230 node0=1024 expecting 1024 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.230 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.494 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.494 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.494 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:05.494 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8107676 kB' 'MemAvailable: 9486096 kB' 'Buffers: 2436 kB' 'Cached: 1592876 kB' 'SwapCached: 0 kB' 'Active: 448164 kB' 'Inactive: 1263988 kB' 'Active(anon): 127304 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118204 kB' 'Mapped: 48188 kB' 'Shmem: 10464 kB' 'KReclaimable: 61020 kB' 'Slab: 136836 kB' 'SReclaimable: 61020 kB' 'SUnreclaim: 75816 kB' 'KernelStack: 6228 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
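At this point the first verification pass has already ended with "node0=1024 expecting 1024", and scripts/setup.sh has been re-run with NRHUGE=512 and CLEAR_HUGE=no; as the INFO line above notes, the existing 1024 pages are left in place rather than shrunk, which is exactly what the no_shrink_alloc case is asserting, so the second verify_nr_hugepages pass that begins here expects the same totals. The per-node side of that check can be sketched roughly as below; the loop shape and the "Node <N> key: value" layout of per-node meminfo come straight from the trace, but the exact bookkeeping inside setup/hugepages.sh (it also folds in HugePages_Rsvd and HugePages_Surp, both 0 in this run) may differ in detail.

    expected=1024
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Total: 1024", hence $3/$4.
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
        echo "node$node=$total expecting $expected (surplus: $surp)"
        (( total == expected )) || echo "node$node: unexpected hugepage count" >&2
    done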
00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.494 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
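The scan in progress here is the AnonHugePages lookup that verify_nr_hugepages only performs after the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at setup/hugepages.sh@96: the "always [madvise] never" string is the expanded content of the transparent-hugepage mode file (presumably /sys/kernel/mm/transparent_hugepage/enabled), and the pattern asks whether "[never]" is the selected mode. Roughly, with awk standing in for the script's own get_meminfo helper:

    thp_file=/sys/kernel/mm/transparent_hugepage/enabled
    thp_mode="[never]"                  # fallback that skips the count if the file is absent
    [[ -r $thp_file ]] && thp_mode=$(< "$thp_file")
    if [[ $thp_mode != *"[never]"* ]]; then
        # Only count THP-backed anonymous memory when THP is not pinned to "never".
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    echo "anon_hugepages=${anon:-0}"    # 0 kB here, matching the anon=0 a little further on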
00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.495 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8107824 kB' 'MemAvailable: 9486236 kB' 'Buffers: 2436 kB' 'Cached: 1592876 kB' 'SwapCached: 0 kB' 'Active: 448024 kB' 'Inactive: 1263988 kB' 'Active(anon): 127164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118268 kB' 'Mapped: 48028 kB' 'Shmem: 10464 kB' 'KReclaimable: 61004 kB' 'Slab: 136740 kB' 'SReclaimable: 61004 kB' 'SUnreclaim: 75736 kB' 'KernelStack: 6116 kB' 'PageTables: 3540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
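(The xtrace entries around this point are the setup/common.sh meminfo helper walking /proc/meminfo one "key: value" pair at a time, comparing each key against the requested field (HugePages_Surp in this pass) and echoing its value once a match is found. A minimal standalone sketch of the same pattern in bash; the name meminfo_value, the node argument, and the echo-0 fallback are illustrative assumptions, not the actual SPDK helper:

    meminfo_value() {
        local want=$1 node=${2:-}
        local file=/proc/meminfo
        # With a node argument, consult that node's own meminfo instead.
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        local key val _
        # Per-node meminfo prefixes each line with "Node <n> "; strip it so
        # both files parse identically, then split on ": " as the trace does.
        while IFS=': ' read -r key val _; do
            if [[ $key == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$file")
        echo 0    # field absent: fall back to 0 (an assumption of this sketch)
    }

    meminfo_value HugePages_Surp    # prints the surplus hugepage count, 0 on this runner
)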
00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 
16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.496 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.497 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8107824 kB' 'MemAvailable: 9486236 kB' 'Buffers: 2436 kB' 'Cached: 1592876 kB' 'SwapCached: 0 kB' 'Active: 447724 kB' 'Inactive: 1263988 kB' 'Active(anon): 126864 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117956 kB' 'Mapped: 47904 kB' 'Shmem: 10464 kB' 'KReclaimable: 61004 kB' 'Slab: 136772 kB' 'SReclaimable: 61004 kB' 'SUnreclaim: 75768 kB' 'KernelStack: 6144 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.498 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.499 nr_hugepages=1024 00:04:05.499 resv_hugepages=0 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.499 surplus_hugepages=0 00:04:05.499 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.500 anon_hugepages=0 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8107824 kB' 'MemAvailable: 9486236 kB' 'Buffers: 2436 kB' 'Cached: 1592876 kB' 'SwapCached: 0 kB' 'Active: 447396 kB' 'Inactive: 1263988 kB' 'Active(anon): 126536 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117660 kB' 'Mapped: 47904 kB' 'Shmem: 10464 kB' 'KReclaimable: 61004 kB' 'Slab: 136772 kB' 'SReclaimable: 61004 kB' 'SUnreclaim: 75768 kB' 'KernelStack: 6128 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
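(Earlier in this block the three lookups all returned 0 (anon=0, surp=0, resv=0), the script echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the arithmetic guards (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) both passed before HugePages_Total is re-read in the trace that follows. One way to express the same reconciliation against live /proc/meminfo values, as a self-contained sketch; the mi helper and variable names are illustrative, not the traced code:

    expected=1024
    # Tiny /proc/meminfo lookup used only by this sketch.
    mi() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }
    total=$(mi HugePages_Total)
    surp=$(mi HugePages_Surp)
    resv=$(mi HugePages_Rsvd)
    anon=$(mi AnonHugePages)
    # The pool is treated as consistent when the requested count matches the
    # kernel's total and no surplus or reserved pages skew the accounting.
    if (( expected == total && expected == total + surp + resv )); then
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    else
        echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
    fi
)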
00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.500 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.501 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8107824 kB' 'MemUsed: 4134136 kB' 'SwapCached: 0 kB' 'Active: 
447604 kB' 'Inactive: 1263988 kB' 'Active(anon): 126744 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1263988 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1595312 kB' 'Mapped: 47904 kB' 'AnonPages: 117608 kB' 'Shmem: 10464 kB' 'KernelStack: 6180 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61004 kB' 'Slab: 136772 kB' 'SReclaimable: 61004 kB' 'SUnreclaim: 75768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.502 
16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.502 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same cycle repeats for every remaining node0 meminfo field (Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages) until the HugePages_Surp line is reached ...]
00:04:05.503 16:18:50
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.503 node0=1024 expecting 1024 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.503 00:04:05.503 real 0m0.984s 00:04:05.503 user 0m0.479s 00:04:05.503 sys 0m0.569s 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.503 16:18:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.503 ************************************ 00:04:05.503 END TEST no_shrink_alloc 00:04:05.503 ************************************ 00:04:05.503 16:18:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:05.503 16:18:51 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:05.503 16:18:51 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:05.503 16:18:51 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.761 
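The xtrace above is the setup harness's meminfo reader at work: it walks /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node is given, one 'field: value' pair at a time with IFS=': ' and echoes the value of the requested field. A minimal standalone sketch of that pattern; the function name and the sed-based prefix strip are illustrative choices, not the exact setup/common.sh helper.

# Sketch of the meminfo-scan pattern traced above (illustrative helper, not the
# exact setup/common.sh implementation): print the value of one field from
# /proc/meminfo, or from a node's meminfo file when a node number is given.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node N "; strip it so the field
    # names match /proc/meminfo, then skip lines until the requested field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# e.g. get_meminfo_sketch HugePages_Total     -> 1024 in this run
#      get_meminfo_sketch HugePages_Surp 0    -> 0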
16:18:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.761 16:18:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.761 16:18:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.761 16:18:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.761 16:18:51 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:05.761 16:18:51 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:05.761 00:04:05.761 real 0m4.420s 00:04:05.761 user 0m2.113s 00:04:05.761 sys 0m2.391s 00:04:05.761 16:18:51 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.761 16:18:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.761 ************************************ 00:04:05.761 END TEST hugepages 00:04:05.761 ************************************ 00:04:05.762 16:18:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:05.762 16:18:51 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:05.762 16:18:51 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.762 16:18:51 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.762 16:18:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.762 ************************************ 00:04:05.762 START TEST driver 00:04:05.762 ************************************ 00:04:05.762 16:18:51 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:05.762 * Looking for test storage... 00:04:05.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.762 16:18:51 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:05.762 16:18:51 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.762 16:18:51 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.326 16:18:51 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:06.327 16:18:51 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.327 16:18:51 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.327 16:18:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:06.327 ************************************ 00:04:06.327 START TEST guess_driver 00:04:06.327 ************************************ 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
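The clear_hp trace that closes the hugepages suite writes 0 into every per-node hugepage pool and exports CLEAR_HUGE=yes. The redirect target is not captured in the xtrace, so the nr_hugepages knob below is an assumption about where the 'echo 0' lands; this is a rough equivalent, not the literal script.

# Rough equivalent of the clear_hp step traced above: reset every hugepage
# pool on every NUMA node so later suites start from a clean state.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # assumed target of the traced 'echo 0'
    done
done
export CLEAR_HUGE=yes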
00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:06.327 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:06.327 Looking for driver=uio_pci_generic 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.327 16:18:51 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.893 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:06.893 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:06.893 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.151 16:18:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.780 00:04:07.780 real 0m1.429s 00:04:07.780 user 0m0.507s 00:04:07.780 sys 0m0.918s 00:04:07.780 16:18:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:07.780 16:18:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:07.780 ************************************ 00:04:07.780 END TEST guess_driver 00:04:07.780 ************************************ 00:04:07.780 16:18:53 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:07.780 00:04:07.780 real 0m2.111s 00:04:07.780 user 0m0.743s 00:04:07.780 sys 0m1.418s 00:04:07.780 16:18:53 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.780 16:18:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:07.780 ************************************ 00:04:07.780 END TEST driver 00:04:07.780 ************************************ 00:04:07.780 16:18:53 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:07.780 16:18:53 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:07.780 16:18:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.780 16:18:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.780 16:18:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.780 ************************************ 00:04:07.780 START TEST devices 00:04:07.780 ************************************ 00:04:07.780 16:18:53 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:08.039 * Looking for test storage... 00:04:08.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.039 16:18:53 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:08.039 16:18:53 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:08.039 16:18:53 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.039 16:18:53 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.603 16:18:54 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
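guess_driver, traced a little further up, prefers vfio when IOMMU groups are populated (or unsafe no-IOMMU mode is enabled) and otherwise falls back to uio_pci_generic, accepting it only if modprobe --show-depends resolves it to real .ko modules. A condensed sketch of that decision; the function name and the vfio-pci module name on the success path are assumptions, and this run takes the uio_pci_generic branch.

# Condensed sketch of the pick_driver decision (illustrative, not the exact
# test/setup/driver.sh code).
pick_driver_sketch() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio needs populated IOMMU groups, or the unsafe no-IOMMU escape hatch.
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci          # assumed module name for the vfio path
    # Otherwise accept uio_pci_generic only if modprobe resolves it to .ko files.
    elif modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic   # the branch this run takes
    else
        echo 'No valid driver found'
    fi
}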
00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:08.603 16:18:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:08.604 16:18:54 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:08.604 16:18:54 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:08.604 16:18:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:08.604 16:18:54 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:08.604 No valid GPT data, bailing 00:04:08.604 16:18:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.604 16:18:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:08.604 16:18:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:08.604 16:18:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:08.604 16:18:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:08.604 16:18:54 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:08.604 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:08.604 
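The devices suite screens every nvme namespace the same way: skip zoned devices (queue/zoned other than "none"), make sure the disk carries no partition table (the spdk-gpt.py run plus the blkid PTTYPE probe behind each "No valid GPT data, bailing" line), and require at least min_disk_size=3221225472 bytes. A self-contained sketch of those checks; the helper names are illustrative and the spdk-gpt.py step is folded into the blkid probe.

# Sketch of the per-device screening traced above.
is_zoned_sketch() {
    local dev=$1
    [[ -e /sys/block/$dev/queue/zoned && $(< /sys/block/$dev/queue/zoned) != none ]]
}

usable_test_disk_sketch() {
    local dev=$1
    local min_disk_size=3221225472            # 3 GiB, the same threshold as this run
    is_zoned_sketch "$dev" && return 1        # zoned namespaces are skipped
    # An empty PTTYPE ("No valid GPT data, bailing") means the disk is blank.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2> /dev/null) ]] && return 1
    local sectors
    sectors=$(< "/sys/block/$dev/size")       # size in 512-byte sectors
    (( sectors * 512 >= min_disk_size ))
}

# In this run all four namespaces pass (nvme0n1/n2/n3 at 4294967296 bytes,
# nvme1n1 at 5368709120 bytes) and nvme0n1 becomes the test disk.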
16:18:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:08.604 16:18:54 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:08.862 No valid GPT data, bailing 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:08.862 16:18:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:08.862 16:18:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:08.862 16:18:54 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:08.862 No valid GPT data, bailing 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:08.862 16:18:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:08.862 16:18:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:08.862 16:18:54 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:08.862 16:18:54 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:08.862 No valid GPT data, bailing 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:08.862 16:18:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:08.862 16:18:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:08.862 16:18:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:08.862 16:18:54 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:08.862 16:18:54 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:08.862 16:18:54 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.862 16:18:54 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.862 16:18:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:08.862 ************************************ 00:04:08.862 START TEST nvme_mount 00:04:08.862 ************************************ 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:08.862 16:18:54 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:10.235 Creating new GPT entries in memory. 00:04:10.235 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.235 other utilities. 00:04:10.235 16:18:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.235 16:18:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.235 16:18:55 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.235 16:18:55 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.235 16:18:55 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:11.172 Creating new GPT entries in memory. 00:04:11.172 The operation has completed successfully. 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56955 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.172 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.430 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:11.688 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:11.688 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:11.688 16:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:11.947 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:11.947 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:11.947 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:11.947 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:11.947 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.205 16:18:57 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.205 16:18:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.463 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.463 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:12.463 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:12.463 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.463 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.463 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.745 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.745 00:04:12.745 real 0m3.911s 00:04:12.745 user 0m0.649s 00:04:12.745 sys 0m0.992s 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.745 16:18:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:12.745 ************************************ 00:04:12.745 END TEST nvme_mount 00:04:12.745 ************************************ 00:04:13.003 16:18:58 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:13.003 16:18:58 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:13.003 16:18:58 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.003 16:18:58 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.003 16:18:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.003 ************************************ 00:04:13.003 START TEST dm_mount 00:04:13.003 ************************************ 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.003 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.004 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:13.004 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:13.004 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.004 16:18:58 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:13.936 Creating new GPT entries in memory. 00:04:13.936 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:13.936 other utilities. 00:04:13.936 16:18:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:13.936 16:18:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.936 16:18:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:13.936 16:18:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.936 16:18:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:14.868 Creating new GPT entries in memory. 00:04:14.868 The operation has completed successfully. 00:04:14.868 16:19:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:14.868 16:19:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.868 16:19:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.868 16:19:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.868 16:19:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:16.264 The operation has completed successfully. 
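For readers skimming the xtrace, the dm_mount setup above condenses to the following sketch. The disk, partition ranges, filesystem command and mount point are taken from the logged calls; the device-mapper table is only an illustrative assumption, since the trace records the bare "dmsetup create nvme_dm_test" without showing the table that devices.sh feeds it.

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount

sgdisk "$disk" --zap-all                              # wipe the GPT, as in common.sh@56
flock "$disk" sgdisk "$disk" --new=1:2048:264191      # partition 1, 262144 sectors (128 MiB)
flock "$disk" sgdisk "$disk" --new=2:264192:526335    # partition 2, 262144 sectors (128 MiB)

# Assumed table: a linear target concatenating the two partitions into one dm device.
dmsetup create nvme_dm_test <<'EOF'
0 262144 linear /dev/nvme0n1p1 0
262144 262144 linear /dev/nvme0n1p2 0
EOF

mkdir -p "$mnt"
mkfs.ext4 -qF /dev/mapper/nvme_dm_test                # common.sh@71
mount /dev/mapper/nvme_dm_test "$mnt"                 # common.sh@72

The later cleanup traced below mirrors this in reverse: umount, dmsetup remove --force nvme_dm_test, then wipefs --all on each partition.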
00:04:16.264 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57384 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.265 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.523 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.524 16:19:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:16.783 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.783 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:16.783 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:16.783 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.783 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.783 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.783 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:16.783 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:17.041 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:17.041 00:04:17.041 real 0m4.144s 00:04:17.041 user 0m0.427s 00:04:17.041 sys 0m0.681s 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.041 16:19:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:17.041 ************************************ 00:04:17.041 END TEST dm_mount 00:04:17.041 ************************************ 00:04:17.041 16:19:02 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:17.041 16:19:02 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:17.041 16:19:02 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:17.041 16:19:02 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.041 16:19:02 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.041 16:19:02 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:17.041 16:19:02 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.041 16:19:02 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:17.300 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:17.300 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:17.300 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:17.300 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:17.300 16:19:02 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:17.300 16:19:02 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:17.300 16:19:02 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:17.300 16:19:02 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.300 16:19:02 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:17.300 16:19:02 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.300 16:19:02 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:17.300 00:04:17.300 real 0m9.551s 00:04:17.300 user 0m1.673s 00:04:17.300 sys 0m2.292s 00:04:17.300 16:19:02 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.300 16:19:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:17.300 ************************************ 00:04:17.300 END TEST devices 00:04:17.300 ************************************ 00:04:17.300 16:19:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:17.300 00:04:17.300 real 0m21.100s 00:04:17.300 user 0m6.709s 00:04:17.300 sys 0m8.853s 00:04:17.300 16:19:02 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.300 16:19:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:17.300 ************************************ 00:04:17.300 END TEST setup.sh 00:04:17.300 ************************************ 00:04:17.558 16:19:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.558 16:19:02 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:18.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.124 Hugepages 00:04:18.124 node hugesize free / total 00:04:18.124 node0 1048576kB 0 / 0 00:04:18.124 node0 2048kB 2048 / 2048 00:04:18.124 00:04:18.124 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.124 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:18.124 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:18.381 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:18.381 16:19:03 -- spdk/autotest.sh@130 -- # uname -s 00:04:18.381 16:19:03 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:18.381 16:19:03 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:18.381 16:19:03 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.946 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.946 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.946 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.204 16:19:04 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:20.139 16:19:05 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:20.139 16:19:05 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:20.139 16:19:05 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.139 16:19:05 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:20.139 16:19:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:20.139 16:19:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:20.139 16:19:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.139 16:19:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:20.139 16:19:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:20.139 16:19:05 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:20.139 16:19:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:20.139 16:19:05 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.656 Waiting for block devices as requested 00:04:20.656 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.656 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.656 16:19:06 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:20.656 16:19:06 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:20.656 16:19:06 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:20.656 16:19:06 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:20.656 16:19:06 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:20.656 16:19:06 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:20.656 16:19:06 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:20.656 16:19:06 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:20.656 16:19:06 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:20.656 16:19:06 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:20.656 16:19:06 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:20.656 16:19:06 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:20.656 16:19:06 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:20.656 16:19:06 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:20.656 16:19:06 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:20.656 16:19:06 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:20.656 16:19:06 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:20.656 16:19:06 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:20.656 16:19:06 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:20.950 16:19:06 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:20.951 16:19:06 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:20.951 16:19:06 -- common/autotest_common.sh@1557 -- # continue 00:04:20.951 
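The per-controller check traced above, and repeated for the second controller just below, reduces to reading two identify-controller fields with nvme-cli: the field names and the cut -d: -f2 extraction come straight from the trace, while the 0x8 mask is the NVMe OACS namespace-management bit that produces the logged oacs_ns_manage=8; the loop wrapper is only a sketch.

for ctrlr in /dev/nvme1 /dev/nvme0; do                             # controllers seen in this run
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # ' 0x12a' here
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # ' 0' here
    if (( (oacs & 0x8) != 0 )) && (( unvmcap == 0 )); then
        # namespace management is supported and no capacity is left unallocated,
        # so the autotest skips the namespace revert for this controller
        echo "$ctrlr: nothing to revert"
        continue
    fi
done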
16:19:06 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:20.951 16:19:06 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:20.951 16:19:06 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:20.951 16:19:06 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:20.951 16:19:06 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:20.951 16:19:06 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:20.951 16:19:06 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:20.951 16:19:06 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:20.951 16:19:06 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:20.951 16:19:06 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:20.951 16:19:06 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:20.951 16:19:06 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:20.951 16:19:06 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:20.951 16:19:06 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:20.951 16:19:06 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:20.951 16:19:06 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:20.951 16:19:06 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:20.951 16:19:06 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:20.951 16:19:06 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:20.951 16:19:06 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:20.951 16:19:06 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:20.951 16:19:06 -- common/autotest_common.sh@1557 -- # continue 00:04:20.951 16:19:06 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:20.951 16:19:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:20.951 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:04:20.951 16:19:06 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:20.951 16:19:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.951 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:04:20.951 16:19:06 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.517 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.776 16:19:07 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:21.776 16:19:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.776 16:19:07 -- common/autotest_common.sh@10 -- # set +x 00:04:21.776 16:19:07 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:21.776 16:19:07 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:21.776 16:19:07 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:21.776 16:19:07 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:21.776 16:19:07 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:21.776 16:19:07 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:21.776 16:19:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:21.776 16:19:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:21.776 16:19:07 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.776 16:19:07 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:21.776 16:19:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:21.776 16:19:07 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:21.776 16:19:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:21.776 16:19:07 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:21.776 16:19:07 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:21.776 16:19:07 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:21.776 16:19:07 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:21.776 16:19:07 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:21.776 16:19:07 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:21.776 16:19:07 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:21.776 16:19:07 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:21.776 16:19:07 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:21.776 16:19:07 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:21.776 16:19:07 -- common/autotest_common.sh@1593 -- # return 0 00:04:21.776 16:19:07 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:21.776 16:19:07 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:21.776 16:19:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:21.776 16:19:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:21.776 16:19:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:21.776 16:19:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.776 16:19:07 -- common/autotest_common.sh@10 -- # set +x 00:04:21.776 16:19:07 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:21.776 16:19:07 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:21.776 16:19:07 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:21.776 16:19:07 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:21.776 16:19:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.776 16:19:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.776 16:19:07 -- common/autotest_common.sh@10 -- # set +x 00:04:21.776 ************************************ 00:04:21.776 START TEST env 00:04:21.776 ************************************ 00:04:21.776 16:19:07 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.035 * Looking for test storage... 
00:04:22.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:22.035 16:19:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.035 16:19:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.035 16:19:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.035 16:19:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.035 ************************************ 00:04:22.035 START TEST env_memory 00:04:22.035 ************************************ 00:04:22.035 16:19:07 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.035 00:04:22.035 00:04:22.035 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.035 http://cunit.sourceforge.net/ 00:04:22.035 00:04:22.035 00:04:22.035 Suite: memory 00:04:22.035 Test: alloc and free memory map ...[2024-07-15 16:19:07.411171] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:22.035 passed 00:04:22.035 Test: mem map translation ...[2024-07-15 16:19:07.442571] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:22.035 [2024-07-15 16:19:07.442623] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:22.035 [2024-07-15 16:19:07.442682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:22.035 [2024-07-15 16:19:07.442694] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:22.035 passed 00:04:22.035 Test: mem map registration ...[2024-07-15 16:19:07.507269] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:22.035 [2024-07-15 16:19:07.507327] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:22.035 passed 00:04:22.294 Test: mem map adjacent registrations ...passed 00:04:22.294 00:04:22.294 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.294 suites 1 1 n/a 0 0 00:04:22.294 tests 4 4 4 0 0 00:04:22.294 asserts 152 152 152 0 n/a 00:04:22.294 00:04:22.294 Elapsed time = 0.222 seconds 00:04:22.294 00:04:22.294 real 0m0.238s 00:04:22.294 user 0m0.219s 00:04:22.294 sys 0m0.017s 00:04:22.294 16:19:07 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.294 ************************************ 00:04:22.294 END TEST env_memory 00:04:22.294 ************************************ 00:04:22.294 16:19:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:22.294 16:19:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:22.294 16:19:07 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:22.294 16:19:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.294 16:19:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.294 16:19:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.294 ************************************ 00:04:22.294 START TEST env_vtophys 
00:04:22.294 ************************************ 00:04:22.294 16:19:07 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:22.294 EAL: lib.eal log level changed from notice to debug 00:04:22.294 EAL: Detected lcore 0 as core 0 on socket 0 00:04:22.294 EAL: Detected lcore 1 as core 0 on socket 0 00:04:22.294 EAL: Detected lcore 2 as core 0 on socket 0 00:04:22.294 EAL: Detected lcore 3 as core 0 on socket 0 00:04:22.294 EAL: Detected lcore 4 as core 0 on socket 0 00:04:22.294 EAL: Detected lcore 5 as core 0 on socket 0 00:04:22.294 EAL: Detected lcore 6 as core 0 on socket 0 00:04:22.294 EAL: Detected lcore 7 as core 0 on socket 0 00:04:22.294 EAL: Detected lcore 8 as core 0 on socket 0 00:04:22.294 EAL: Detected lcore 9 as core 0 on socket 0 00:04:22.294 EAL: Maximum logical cores by configuration: 128 00:04:22.294 EAL: Detected CPU lcores: 10 00:04:22.294 EAL: Detected NUMA nodes: 1 00:04:22.294 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:22.294 EAL: Detected shared linkage of DPDK 00:04:22.294 EAL: No shared files mode enabled, IPC will be disabled 00:04:22.294 EAL: Selected IOVA mode 'PA' 00:04:22.294 EAL: Probing VFIO support... 00:04:22.294 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:22.294 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:22.294 EAL: Ask a virtual area of 0x2e000 bytes 00:04:22.294 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:22.294 EAL: Setting up physically contiguous memory... 00:04:22.294 EAL: Setting maximum number of open files to 524288 00:04:22.294 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:22.294 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:22.294 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.294 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:22.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.294 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.294 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:22.294 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:22.294 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.294 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:22.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.294 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.294 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:22.294 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:22.294 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.294 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:22.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.294 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.294 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:22.294 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:22.294 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.294 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:22.294 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.294 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.294 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:22.294 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:22.294 EAL: Hugepages will be freed exactly as allocated. 
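All of the 0x800kB (2 MiB) memseg lists being reserved above are carved out of the hugepage pool set up before the run; the earlier status output showed node0 with 2048 pages of 2048 kB. A minimal way to reserve an equivalent pool by hand, either through the SPDK helper script used throughout this log or directly through the kernel; HUGEMEM is in MB and is the variable the setup script is conventionally documented to honour, so treat the exact knob as an assumption.

# via the SPDK helper (reserves hugepages and binds devices as configured):
sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh

# or directly through the kernel, then verify:
echo 2048 | sudo tee /proc/sys/vm/nr_hugepages
grep -i huge /proc/meminfo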
00:04:22.294 EAL: No shared files mode enabled, IPC is disabled 00:04:22.294 EAL: No shared files mode enabled, IPC is disabled 00:04:22.294 EAL: TSC frequency is ~2200000 KHz 00:04:22.294 EAL: Main lcore 0 is ready (tid=7f5901518a00;cpuset=[0]) 00:04:22.294 EAL: Trying to obtain current memory policy. 00:04:22.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.294 EAL: Restoring previous memory policy: 0 00:04:22.294 EAL: request: mp_malloc_sync 00:04:22.294 EAL: No shared files mode enabled, IPC is disabled 00:04:22.294 EAL: Heap on socket 0 was expanded by 2MB 00:04:22.294 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:22.294 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:22.294 EAL: Mem event callback 'spdk:(nil)' registered 00:04:22.294 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:22.294 00:04:22.294 00:04:22.294 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.294 http://cunit.sourceforge.net/ 00:04:22.294 00:04:22.294 00:04:22.294 Suite: components_suite 00:04:22.294 Test: vtophys_malloc_test ...passed 00:04:22.294 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:22.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.294 EAL: Restoring previous memory policy: 4 00:04:22.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.294 EAL: request: mp_malloc_sync 00:04:22.294 EAL: No shared files mode enabled, IPC is disabled 00:04:22.294 EAL: Heap on socket 0 was expanded by 4MB 00:04:22.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.294 EAL: request: mp_malloc_sync 00:04:22.294 EAL: No shared files mode enabled, IPC is disabled 00:04:22.294 EAL: Heap on socket 0 was shrunk by 4MB 00:04:22.294 EAL: Trying to obtain current memory policy. 00:04:22.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.294 EAL: Restoring previous memory policy: 4 00:04:22.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.294 EAL: request: mp_malloc_sync 00:04:22.294 EAL: No shared files mode enabled, IPC is disabled 00:04:22.294 EAL: Heap on socket 0 was expanded by 6MB 00:04:22.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.294 EAL: request: mp_malloc_sync 00:04:22.294 EAL: No shared files mode enabled, IPC is disabled 00:04:22.294 EAL: Heap on socket 0 was shrunk by 6MB 00:04:22.294 EAL: Trying to obtain current memory policy. 00:04:22.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.294 EAL: Restoring previous memory policy: 4 00:04:22.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.294 EAL: request: mp_malloc_sync 00:04:22.294 EAL: No shared files mode enabled, IPC is disabled 00:04:22.294 EAL: Heap on socket 0 was expanded by 10MB 00:04:22.294 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.294 EAL: request: mp_malloc_sync 00:04:22.294 EAL: No shared files mode enabled, IPC is disabled 00:04:22.294 EAL: Heap on socket 0 was shrunk by 10MB 00:04:22.294 EAL: Trying to obtain current memory policy. 
00:04:22.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.294 EAL: Restoring previous memory policy: 4 00:04:22.295 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.295 EAL: request: mp_malloc_sync 00:04:22.295 EAL: No shared files mode enabled, IPC is disabled 00:04:22.295 EAL: Heap on socket 0 was expanded by 18MB 00:04:22.295 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.295 EAL: request: mp_malloc_sync 00:04:22.295 EAL: No shared files mode enabled, IPC is disabled 00:04:22.295 EAL: Heap on socket 0 was shrunk by 18MB 00:04:22.295 EAL: Trying to obtain current memory policy. 00:04:22.295 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.295 EAL: Restoring previous memory policy: 4 00:04:22.295 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.295 EAL: request: mp_malloc_sync 00:04:22.295 EAL: No shared files mode enabled, IPC is disabled 00:04:22.295 EAL: Heap on socket 0 was expanded by 34MB 00:04:22.295 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.295 EAL: request: mp_malloc_sync 00:04:22.295 EAL: No shared files mode enabled, IPC is disabled 00:04:22.295 EAL: Heap on socket 0 was shrunk by 34MB 00:04:22.295 EAL: Trying to obtain current memory policy. 00:04:22.295 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.554 EAL: Restoring previous memory policy: 4 00:04:22.554 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.554 EAL: request: mp_malloc_sync 00:04:22.554 EAL: No shared files mode enabled, IPC is disabled 00:04:22.554 EAL: Heap on socket 0 was expanded by 66MB 00:04:22.554 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.554 EAL: request: mp_malloc_sync 00:04:22.554 EAL: No shared files mode enabled, IPC is disabled 00:04:22.554 EAL: Heap on socket 0 was shrunk by 66MB 00:04:22.554 EAL: Trying to obtain current memory policy. 00:04:22.554 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.554 EAL: Restoring previous memory policy: 4 00:04:22.554 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.554 EAL: request: mp_malloc_sync 00:04:22.554 EAL: No shared files mode enabled, IPC is disabled 00:04:22.554 EAL: Heap on socket 0 was expanded by 130MB 00:04:22.554 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.554 EAL: request: mp_malloc_sync 00:04:22.554 EAL: No shared files mode enabled, IPC is disabled 00:04:22.554 EAL: Heap on socket 0 was shrunk by 130MB 00:04:22.554 EAL: Trying to obtain current memory policy. 00:04:22.554 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.554 EAL: Restoring previous memory policy: 4 00:04:22.554 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.554 EAL: request: mp_malloc_sync 00:04:22.554 EAL: No shared files mode enabled, IPC is disabled 00:04:22.554 EAL: Heap on socket 0 was expanded by 258MB 00:04:22.554 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.812 EAL: request: mp_malloc_sync 00:04:22.812 EAL: No shared files mode enabled, IPC is disabled 00:04:22.812 EAL: Heap on socket 0 was shrunk by 258MB 00:04:22.813 EAL: Trying to obtain current memory policy. 
00:04:22.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.813 EAL: Restoring previous memory policy: 4 00:04:22.813 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.813 EAL: request: mp_malloc_sync 00:04:22.813 EAL: No shared files mode enabled, IPC is disabled 00:04:22.813 EAL: Heap on socket 0 was expanded by 514MB 00:04:22.813 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.071 EAL: request: mp_malloc_sync 00:04:23.071 EAL: No shared files mode enabled, IPC is disabled 00:04:23.072 EAL: Heap on socket 0 was shrunk by 514MB 00:04:23.072 EAL: Trying to obtain current memory policy. 00:04:23.072 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.330 EAL: Restoring previous memory policy: 4 00:04:23.330 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.330 EAL: request: mp_malloc_sync 00:04:23.330 EAL: No shared files mode enabled, IPC is disabled 00:04:23.330 EAL: Heap on socket 0 was expanded by 1026MB 00:04:23.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.589 passed 00:04:23.589 00:04:23.589 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.589 suites 1 1 n/a 0 0 00:04:23.589 tests 2 2 2 0 0 00:04:23.589 asserts 5232 5232 5232 0 n/a 00:04:23.589 00:04:23.589 Elapsed time = 1.291 seconds 00:04:23.589 EAL: request: mp_malloc_sync 00:04:23.589 EAL: No shared files mode enabled, IPC is disabled 00:04:23.589 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:23.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.589 EAL: request: mp_malloc_sync 00:04:23.589 EAL: No shared files mode enabled, IPC is disabled 00:04:23.589 EAL: Heap on socket 0 was shrunk by 2MB 00:04:23.589 EAL: No shared files mode enabled, IPC is disabled 00:04:23.589 EAL: No shared files mode enabled, IPC is disabled 00:04:23.589 EAL: No shared files mode enabled, IPC is disabled 00:04:23.848 00:04:23.848 real 0m1.487s 00:04:23.848 user 0m0.821s 00:04:23.848 sys 0m0.532s 00:04:23.848 16:19:09 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.848 ************************************ 00:04:23.848 END TEST env_vtophys 00:04:23.848 ************************************ 00:04:23.848 16:19:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:23.848 16:19:09 env -- common/autotest_common.sh@1142 -- # return 0 00:04:23.848 16:19:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:23.848 16:19:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.848 16:19:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.848 16:19:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.848 ************************************ 00:04:23.848 START TEST env_pci 00:04:23.848 ************************************ 00:04:23.848 16:19:09 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:23.848 00:04:23.848 00:04:23.848 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.848 http://cunit.sourceforge.net/ 00:04:23.848 00:04:23.848 00:04:23.848 Suite: pci 00:04:23.848 Test: pci_hook ...[2024-07-15 16:19:09.211008] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58577 has claimed it 00:04:23.848 passed 00:04:23.848 00:04:23.848 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.848 suites 1 1 n/a 0 0 00:04:23.848 tests 1 1 1 0 0 00:04:23.848 asserts 25 25 25 0 n/a 00:04:23.848 
00:04:23.848 Elapsed time = 0.003 seconds 00:04:23.848 EAL: Cannot find device (10000:00:01.0) 00:04:23.848 EAL: Failed to attach device on primary process 00:04:23.848 ************************************ 00:04:23.848 END TEST env_pci 00:04:23.848 ************************************ 00:04:23.848 00:04:23.848 real 0m0.022s 00:04:23.848 user 0m0.011s 00:04:23.848 sys 0m0.010s 00:04:23.848 16:19:09 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.848 16:19:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:23.848 16:19:09 env -- common/autotest_common.sh@1142 -- # return 0 00:04:23.848 16:19:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:23.848 16:19:09 env -- env/env.sh@15 -- # uname 00:04:23.848 16:19:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:23.848 16:19:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:23.848 16:19:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.848 16:19:09 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:23.848 16:19:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.848 16:19:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.848 ************************************ 00:04:23.848 START TEST env_dpdk_post_init 00:04:23.848 ************************************ 00:04:23.848 16:19:09 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:23.848 EAL: Detected CPU lcores: 10 00:04:23.848 EAL: Detected NUMA nodes: 1 00:04:23.848 EAL: Detected shared linkage of DPDK 00:04:23.848 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:23.848 EAL: Selected IOVA mode 'PA' 00:04:24.107 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.107 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:24.107 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:24.107 Starting DPDK initialization... 00:04:24.107 Starting SPDK post initialization... 00:04:24.107 SPDK NVMe probe 00:04:24.107 Attaching to 0000:00:10.0 00:04:24.107 Attaching to 0000:00:11.0 00:04:24.107 Attached to 0000:00:10.0 00:04:24.107 Attached to 0000:00:11.0 00:04:24.107 Cleaning up... 
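Both controllers attached here (0000:00:10.0 and 0000:00:11.0) are the ones the earlier status table listed, and whether they are visible to the DPDK probe depends on which driver they are bound to at that moment. A short sketch of inspecting and restricting that binding with the same script this log uses; PCI_ALLOWED appears verbatim in the devices.sh trace above, but treat its exact semantics as per the script's own help text.

sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status          # show current driver per BDF
sudo PCI_ALLOWED="0000:00:11.0" \
     /home/vagrant/spdk_repo/spdk/scripts/setup.sh config          # touch only the allowed device
sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset           # hand devices back to kernel drivers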
00:04:24.107 ************************************ 00:04:24.107 END TEST env_dpdk_post_init 00:04:24.107 ************************************ 00:04:24.107 00:04:24.107 real 0m0.181s 00:04:24.107 user 0m0.046s 00:04:24.107 sys 0m0.035s 00:04:24.107 16:19:09 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.107 16:19:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.107 16:19:09 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.107 16:19:09 env -- env/env.sh@26 -- # uname 00:04:24.107 16:19:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.107 16:19:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.107 16:19:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.107 16:19:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.107 16:19:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.107 ************************************ 00:04:24.107 START TEST env_mem_callbacks 00:04:24.107 ************************************ 00:04:24.107 16:19:09 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.107 EAL: Detected CPU lcores: 10 00:04:24.107 EAL: Detected NUMA nodes: 1 00:04:24.107 EAL: Detected shared linkage of DPDK 00:04:24.107 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.107 EAL: Selected IOVA mode 'PA' 00:04:24.107 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.107 00:04:24.107 00:04:24.107 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.107 http://cunit.sourceforge.net/ 00:04:24.107 00:04:24.107 00:04:24.107 Suite: memory 00:04:24.107 Test: test ... 
00:04:24.107 register 0x200000200000 2097152 00:04:24.107 malloc 3145728 00:04:24.107 register 0x200000400000 4194304 00:04:24.107 buf 0x200000500000 len 3145728 PASSED 00:04:24.107 malloc 64 00:04:24.107 buf 0x2000004fff40 len 64 PASSED 00:04:24.107 malloc 4194304 00:04:24.365 register 0x200000800000 6291456 00:04:24.365 buf 0x200000a00000 len 4194304 PASSED 00:04:24.365 free 0x200000500000 3145728 00:04:24.365 free 0x2000004fff40 64 00:04:24.365 unregister 0x200000400000 4194304 PASSED 00:04:24.365 free 0x200000a00000 4194304 00:04:24.365 unregister 0x200000800000 6291456 PASSED 00:04:24.365 malloc 8388608 00:04:24.365 register 0x200000400000 10485760 00:04:24.365 buf 0x200000600000 len 8388608 PASSED 00:04:24.365 free 0x200000600000 8388608 00:04:24.365 unregister 0x200000400000 10485760 PASSED 00:04:24.365 passed 00:04:24.365 00:04:24.365 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.365 suites 1 1 n/a 0 0 00:04:24.365 tests 1 1 1 0 0 00:04:24.365 asserts 15 15 15 0 n/a 00:04:24.365 00:04:24.365 Elapsed time = 0.010 seconds 00:04:24.365 ************************************ 00:04:24.365 END TEST env_mem_callbacks 00:04:24.365 ************************************ 00:04:24.365 00:04:24.365 real 0m0.146s 00:04:24.365 user 0m0.021s 00:04:24.365 sys 0m0.022s 00:04:24.365 16:19:09 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.365 16:19:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:24.365 16:19:09 env -- common/autotest_common.sh@1142 -- # return 0 00:04:24.365 ************************************ 00:04:24.365 END TEST env 00:04:24.365 ************************************ 00:04:24.365 00:04:24.365 real 0m2.437s 00:04:24.365 user 0m1.255s 00:04:24.365 sys 0m0.826s 00:04:24.365 16:19:09 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.365 16:19:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.365 16:19:09 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.365 16:19:09 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:24.365 16:19:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.365 16:19:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.365 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:04:24.365 ************************************ 00:04:24.365 START TEST rpc 00:04:24.365 ************************************ 00:04:24.365 16:19:09 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:24.365 * Looking for test storage... 00:04:24.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:24.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.365 16:19:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58681 00:04:24.365 16:19:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.365 16:19:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58681 00:04:24.365 16:19:09 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:24.365 16:19:09 rpc -- common/autotest_common.sh@829 -- # '[' -z 58681 ']' 00:04:24.365 16:19:09 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.365 16:19:09 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.365 16:19:09 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
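The tail of the line above shows rpc.sh launching spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then waiting for the target to listen on /var/tmp/spdk.sock before any RPC is sent. A minimal stand-alone sketch of that start-and-wait pattern, assuming SPDK's stock scripts/rpc.py client in place of the harness's rpc_cmd/waitforlisten helpers:

# Sketch only: start the target as rpc.sh does above, then poll its default
# UNIX socket before sending the first RPC (a crude stand-in for waitforlisten).
cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt -e bdev &
tgt_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs   # same RPC the integrity test issues first
kill "$tgt_pid"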
00:04:24.365 16:19:09 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.365 16:19:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.624 [2024-07-15 16:19:09.917620] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:24.624 [2024-07-15 16:19:09.917953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58681 ] 00:04:24.624 [2024-07-15 16:19:10.058788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.882 [2024-07-15 16:19:10.182758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:24.882 [2024-07-15 16:19:10.183070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58681' to capture a snapshot of events at runtime. 00:04:24.882 [2024-07-15 16:19:10.183196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:24.882 [2024-07-15 16:19:10.183248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:24.882 [2024-07-15 16:19:10.183277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58681 for offline analysis/debug. 00:04:24.882 [2024-07-15 16:19:10.183425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.882 [2024-07-15 16:19:10.238854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:25.449 16:19:10 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.449 16:19:10 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:25.449 16:19:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.449 16:19:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.449 16:19:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:25.449 16:19:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:25.449 16:19:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.449 16:19:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.449 16:19:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.449 ************************************ 00:04:25.449 START TEST rpc_integrity 00:04:25.449 ************************************ 00:04:25.449 16:19:10 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:25.449 16:19:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.449 16:19:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.449 16:19:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.449 16:19:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.449 16:19:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.449 16:19:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.709 { 00:04:25.709 "name": "Malloc0", 00:04:25.709 "aliases": [ 00:04:25.709 "c0c61f32-5907-48e0-96f9-3f355cc9f1a8" 00:04:25.709 ], 00:04:25.709 "product_name": "Malloc disk", 00:04:25.709 "block_size": 512, 00:04:25.709 "num_blocks": 16384, 00:04:25.709 "uuid": "c0c61f32-5907-48e0-96f9-3f355cc9f1a8", 00:04:25.709 "assigned_rate_limits": { 00:04:25.709 "rw_ios_per_sec": 0, 00:04:25.709 "rw_mbytes_per_sec": 0, 00:04:25.709 "r_mbytes_per_sec": 0, 00:04:25.709 "w_mbytes_per_sec": 0 00:04:25.709 }, 00:04:25.709 "claimed": false, 00:04:25.709 "zoned": false, 00:04:25.709 "supported_io_types": { 00:04:25.709 "read": true, 00:04:25.709 "write": true, 00:04:25.709 "unmap": true, 00:04:25.709 "flush": true, 00:04:25.709 "reset": true, 00:04:25.709 "nvme_admin": false, 00:04:25.709 "nvme_io": false, 00:04:25.709 "nvme_io_md": false, 00:04:25.709 "write_zeroes": true, 00:04:25.709 "zcopy": true, 00:04:25.709 "get_zone_info": false, 00:04:25.709 "zone_management": false, 00:04:25.709 "zone_append": false, 00:04:25.709 "compare": false, 00:04:25.709 "compare_and_write": false, 00:04:25.709 "abort": true, 00:04:25.709 "seek_hole": false, 00:04:25.709 "seek_data": false, 00:04:25.709 "copy": true, 00:04:25.709 "nvme_iov_md": false 00:04:25.709 }, 00:04:25.709 "memory_domains": [ 00:04:25.709 { 00:04:25.709 "dma_device_id": "system", 00:04:25.709 "dma_device_type": 1 00:04:25.709 }, 00:04:25.709 { 00:04:25.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.709 "dma_device_type": 2 00:04:25.709 } 00:04:25.709 ], 00:04:25.709 "driver_specific": {} 00:04:25.709 } 00:04:25.709 ]' 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.709 [2024-07-15 16:19:11.098499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:25.709 [2024-07-15 16:19:11.098565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.709 [2024-07-15 16:19:11.098586] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17ebda0 00:04:25.709 [2024-07-15 16:19:11.098595] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.709 [2024-07-15 16:19:11.100418] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.709 [2024-07-15 16:19:11.100453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:25.709 Passthru0 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.709 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.709 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.709 { 00:04:25.709 "name": "Malloc0", 00:04:25.709 "aliases": [ 00:04:25.709 "c0c61f32-5907-48e0-96f9-3f355cc9f1a8" 00:04:25.709 ], 00:04:25.709 "product_name": "Malloc disk", 00:04:25.709 "block_size": 512, 00:04:25.709 "num_blocks": 16384, 00:04:25.709 "uuid": "c0c61f32-5907-48e0-96f9-3f355cc9f1a8", 00:04:25.709 "assigned_rate_limits": { 00:04:25.709 "rw_ios_per_sec": 0, 00:04:25.709 "rw_mbytes_per_sec": 0, 00:04:25.709 "r_mbytes_per_sec": 0, 00:04:25.709 "w_mbytes_per_sec": 0 00:04:25.709 }, 00:04:25.709 "claimed": true, 00:04:25.709 "claim_type": "exclusive_write", 00:04:25.709 "zoned": false, 00:04:25.709 "supported_io_types": { 00:04:25.709 "read": true, 00:04:25.709 "write": true, 00:04:25.709 "unmap": true, 00:04:25.709 "flush": true, 00:04:25.709 "reset": true, 00:04:25.709 "nvme_admin": false, 00:04:25.709 "nvme_io": false, 00:04:25.709 "nvme_io_md": false, 00:04:25.709 "write_zeroes": true, 00:04:25.709 "zcopy": true, 00:04:25.709 "get_zone_info": false, 00:04:25.709 "zone_management": false, 00:04:25.709 "zone_append": false, 00:04:25.709 "compare": false, 00:04:25.709 "compare_and_write": false, 00:04:25.709 "abort": true, 00:04:25.709 "seek_hole": false, 00:04:25.709 "seek_data": false, 00:04:25.709 "copy": true, 00:04:25.709 "nvme_iov_md": false 00:04:25.709 }, 00:04:25.709 "memory_domains": [ 00:04:25.709 { 00:04:25.709 "dma_device_id": "system", 00:04:25.709 "dma_device_type": 1 00:04:25.709 }, 00:04:25.709 { 00:04:25.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.710 "dma_device_type": 2 00:04:25.710 } 00:04:25.710 ], 00:04:25.710 "driver_specific": {} 00:04:25.710 }, 00:04:25.710 { 00:04:25.710 "name": "Passthru0", 00:04:25.710 "aliases": [ 00:04:25.710 "5ef645be-0b5b-538c-adfb-52825f739e13" 00:04:25.710 ], 00:04:25.710 "product_name": "passthru", 00:04:25.710 "block_size": 512, 00:04:25.710 "num_blocks": 16384, 00:04:25.710 "uuid": "5ef645be-0b5b-538c-adfb-52825f739e13", 00:04:25.710 "assigned_rate_limits": { 00:04:25.710 "rw_ios_per_sec": 0, 00:04:25.710 "rw_mbytes_per_sec": 0, 00:04:25.710 "r_mbytes_per_sec": 0, 00:04:25.710 "w_mbytes_per_sec": 0 00:04:25.710 }, 00:04:25.710 "claimed": false, 00:04:25.710 "zoned": false, 00:04:25.710 "supported_io_types": { 00:04:25.710 "read": true, 00:04:25.710 "write": true, 00:04:25.710 "unmap": true, 00:04:25.710 "flush": true, 00:04:25.710 "reset": true, 00:04:25.710 "nvme_admin": false, 00:04:25.710 "nvme_io": false, 00:04:25.710 "nvme_io_md": false, 00:04:25.710 "write_zeroes": true, 00:04:25.710 "zcopy": true, 00:04:25.710 "get_zone_info": false, 00:04:25.710 "zone_management": false, 00:04:25.710 "zone_append": false, 00:04:25.710 "compare": false, 00:04:25.710 "compare_and_write": false, 00:04:25.710 "abort": true, 00:04:25.710 "seek_hole": false, 00:04:25.710 "seek_data": false, 00:04:25.710 "copy": true, 00:04:25.710 "nvme_iov_md": false 00:04:25.710 }, 00:04:25.710 "memory_domains": [ 00:04:25.710 { 00:04:25.710 "dma_device_id": "system", 00:04:25.710 
"dma_device_type": 1 00:04:25.710 }, 00:04:25.710 { 00:04:25.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.710 "dma_device_type": 2 00:04:25.710 } 00:04:25.710 ], 00:04:25.710 "driver_specific": { 00:04:25.710 "passthru": { 00:04:25.710 "name": "Passthru0", 00:04:25.710 "base_bdev_name": "Malloc0" 00:04:25.710 } 00:04:25.710 } 00:04:25.710 } 00:04:25.710 ]' 00:04:25.710 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.710 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.710 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.710 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.710 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.710 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.710 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:25.710 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.710 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.710 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.710 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.710 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.710 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.710 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.710 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:25.710 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:25.969 ************************************ 00:04:25.969 END TEST rpc_integrity 00:04:25.969 ************************************ 00:04:25.969 16:19:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.969 00:04:25.969 real 0m0.341s 00:04:25.969 user 0m0.224s 00:04:25.969 sys 0m0.048s 00:04:25.969 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.969 16:19:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.969 16:19:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.969 16:19:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:25.969 16:19:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.969 16:19:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.969 16:19:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.969 ************************************ 00:04:25.969 START TEST rpc_plugins 00:04:25.969 ************************************ 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.969 
16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:25.969 { 00:04:25.969 "name": "Malloc1", 00:04:25.969 "aliases": [ 00:04:25.969 "ff3c8798-d832-472a-8534-755e760751d2" 00:04:25.969 ], 00:04:25.969 "product_name": "Malloc disk", 00:04:25.969 "block_size": 4096, 00:04:25.969 "num_blocks": 256, 00:04:25.969 "uuid": "ff3c8798-d832-472a-8534-755e760751d2", 00:04:25.969 "assigned_rate_limits": { 00:04:25.969 "rw_ios_per_sec": 0, 00:04:25.969 "rw_mbytes_per_sec": 0, 00:04:25.969 "r_mbytes_per_sec": 0, 00:04:25.969 "w_mbytes_per_sec": 0 00:04:25.969 }, 00:04:25.969 "claimed": false, 00:04:25.969 "zoned": false, 00:04:25.969 "supported_io_types": { 00:04:25.969 "read": true, 00:04:25.969 "write": true, 00:04:25.969 "unmap": true, 00:04:25.969 "flush": true, 00:04:25.969 "reset": true, 00:04:25.969 "nvme_admin": false, 00:04:25.969 "nvme_io": false, 00:04:25.969 "nvme_io_md": false, 00:04:25.969 "write_zeroes": true, 00:04:25.969 "zcopy": true, 00:04:25.969 "get_zone_info": false, 00:04:25.969 "zone_management": false, 00:04:25.969 "zone_append": false, 00:04:25.969 "compare": false, 00:04:25.969 "compare_and_write": false, 00:04:25.969 "abort": true, 00:04:25.969 "seek_hole": false, 00:04:25.969 "seek_data": false, 00:04:25.969 "copy": true, 00:04:25.969 "nvme_iov_md": false 00:04:25.969 }, 00:04:25.969 "memory_domains": [ 00:04:25.969 { 00:04:25.969 "dma_device_id": "system", 00:04:25.969 "dma_device_type": 1 00:04:25.969 }, 00:04:25.969 { 00:04:25.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.969 "dma_device_type": 2 00:04:25.969 } 00:04:25.969 ], 00:04:25.969 "driver_specific": {} 00:04:25.969 } 00:04:25.969 ]' 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:25.969 ************************************ 00:04:25.969 END TEST rpc_plugins 00:04:25.969 ************************************ 00:04:25.969 16:19:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:25.969 00:04:25.969 real 0m0.164s 00:04:25.969 user 0m0.106s 00:04:25.969 sys 0m0.021s 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.969 16:19:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.228 16:19:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.228 16:19:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:26.228 16:19:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.228 16:19:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:04:26.228 16:19:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.228 ************************************ 00:04:26.228 START TEST rpc_trace_cmd_test 00:04:26.228 ************************************ 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:26.228 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58681", 00:04:26.228 "tpoint_group_mask": "0x8", 00:04:26.228 "iscsi_conn": { 00:04:26.228 "mask": "0x2", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "scsi": { 00:04:26.228 "mask": "0x4", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "bdev": { 00:04:26.228 "mask": "0x8", 00:04:26.228 "tpoint_mask": "0xffffffffffffffff" 00:04:26.228 }, 00:04:26.228 "nvmf_rdma": { 00:04:26.228 "mask": "0x10", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "nvmf_tcp": { 00:04:26.228 "mask": "0x20", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "ftl": { 00:04:26.228 "mask": "0x40", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "blobfs": { 00:04:26.228 "mask": "0x80", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "dsa": { 00:04:26.228 "mask": "0x200", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "thread": { 00:04:26.228 "mask": "0x400", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "nvme_pcie": { 00:04:26.228 "mask": "0x800", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "iaa": { 00:04:26.228 "mask": "0x1000", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "nvme_tcp": { 00:04:26.228 "mask": "0x2000", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "bdev_nvme": { 00:04:26.228 "mask": "0x4000", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 }, 00:04:26.228 "sock": { 00:04:26.228 "mask": "0x8000", 00:04:26.228 "tpoint_mask": "0x0" 00:04:26.228 } 00:04:26.228 }' 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:26.228 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:26.487 ************************************ 00:04:26.487 END TEST rpc_trace_cmd_test 00:04:26.487 ************************************ 00:04:26.487 16:19:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:26.487 00:04:26.487 real 0m0.264s 00:04:26.487 user 0m0.221s 
00:04:26.487 sys 0m0.032s 00:04:26.487 16:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.487 16:19:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.487 16:19:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.487 16:19:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:26.487 16:19:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:26.487 16:19:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:26.487 16:19:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.487 16:19:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.487 16:19:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.487 ************************************ 00:04:26.487 START TEST rpc_daemon_integrity 00:04:26.487 ************************************ 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.487 16:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.487 { 00:04:26.487 "name": "Malloc2", 00:04:26.487 "aliases": [ 00:04:26.487 "da511993-0c2d-419d-a3ce-e8a3978444e2" 00:04:26.487 ], 00:04:26.487 "product_name": "Malloc disk", 00:04:26.487 "block_size": 512, 00:04:26.487 "num_blocks": 16384, 00:04:26.487 "uuid": "da511993-0c2d-419d-a3ce-e8a3978444e2", 00:04:26.487 "assigned_rate_limits": { 00:04:26.487 "rw_ios_per_sec": 0, 00:04:26.487 "rw_mbytes_per_sec": 0, 00:04:26.487 "r_mbytes_per_sec": 0, 00:04:26.487 "w_mbytes_per_sec": 0 00:04:26.487 }, 00:04:26.487 "claimed": false, 00:04:26.487 "zoned": false, 00:04:26.487 "supported_io_types": { 00:04:26.487 "read": true, 00:04:26.487 "write": true, 00:04:26.487 "unmap": true, 00:04:26.487 "flush": true, 00:04:26.487 "reset": true, 00:04:26.487 "nvme_admin": false, 00:04:26.487 "nvme_io": false, 00:04:26.487 "nvme_io_md": false, 00:04:26.487 "write_zeroes": true, 00:04:26.487 "zcopy": true, 00:04:26.487 "get_zone_info": false, 00:04:26.487 "zone_management": false, 00:04:26.487 "zone_append": false, 
00:04:26.487 "compare": false, 00:04:26.487 "compare_and_write": false, 00:04:26.487 "abort": true, 00:04:26.487 "seek_hole": false, 00:04:26.487 "seek_data": false, 00:04:26.487 "copy": true, 00:04:26.487 "nvme_iov_md": false 00:04:26.487 }, 00:04:26.487 "memory_domains": [ 00:04:26.487 { 00:04:26.487 "dma_device_id": "system", 00:04:26.487 "dma_device_type": 1 00:04:26.487 }, 00:04:26.487 { 00:04:26.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.487 "dma_device_type": 2 00:04:26.487 } 00:04:26.487 ], 00:04:26.487 "driver_specific": {} 00:04:26.487 } 00:04:26.487 ]' 00:04:26.488 16:19:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.488 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.488 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:26.488 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.488 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.488 [2024-07-15 16:19:12.015724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:26.488 [2024-07-15 16:19:12.015787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.488 [2024-07-15 16:19:12.015809] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1850be0 00:04:26.488 [2024-07-15 16:19:12.015818] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.488 [2024-07-15 16:19:12.017629] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.488 [2024-07-15 16:19:12.017660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.488 Passthru0 00:04:26.488 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.488 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.488 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.488 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.747 { 00:04:26.747 "name": "Malloc2", 00:04:26.747 "aliases": [ 00:04:26.747 "da511993-0c2d-419d-a3ce-e8a3978444e2" 00:04:26.747 ], 00:04:26.747 "product_name": "Malloc disk", 00:04:26.747 "block_size": 512, 00:04:26.747 "num_blocks": 16384, 00:04:26.747 "uuid": "da511993-0c2d-419d-a3ce-e8a3978444e2", 00:04:26.747 "assigned_rate_limits": { 00:04:26.747 "rw_ios_per_sec": 0, 00:04:26.747 "rw_mbytes_per_sec": 0, 00:04:26.747 "r_mbytes_per_sec": 0, 00:04:26.747 "w_mbytes_per_sec": 0 00:04:26.747 }, 00:04:26.747 "claimed": true, 00:04:26.747 "claim_type": "exclusive_write", 00:04:26.747 "zoned": false, 00:04:26.747 "supported_io_types": { 00:04:26.747 "read": true, 00:04:26.747 "write": true, 00:04:26.747 "unmap": true, 00:04:26.747 "flush": true, 00:04:26.747 "reset": true, 00:04:26.747 "nvme_admin": false, 00:04:26.747 "nvme_io": false, 00:04:26.747 "nvme_io_md": false, 00:04:26.747 "write_zeroes": true, 00:04:26.747 "zcopy": true, 00:04:26.747 "get_zone_info": false, 00:04:26.747 "zone_management": false, 00:04:26.747 "zone_append": false, 00:04:26.747 "compare": false, 00:04:26.747 "compare_and_write": false, 00:04:26.747 "abort": true, 00:04:26.747 "seek_hole": 
false, 00:04:26.747 "seek_data": false, 00:04:26.747 "copy": true, 00:04:26.747 "nvme_iov_md": false 00:04:26.747 }, 00:04:26.747 "memory_domains": [ 00:04:26.747 { 00:04:26.747 "dma_device_id": "system", 00:04:26.747 "dma_device_type": 1 00:04:26.747 }, 00:04:26.747 { 00:04:26.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.747 "dma_device_type": 2 00:04:26.747 } 00:04:26.747 ], 00:04:26.747 "driver_specific": {} 00:04:26.747 }, 00:04:26.747 { 00:04:26.747 "name": "Passthru0", 00:04:26.747 "aliases": [ 00:04:26.747 "b0820598-671f-58f8-ade9-6d5aba4d475a" 00:04:26.747 ], 00:04:26.747 "product_name": "passthru", 00:04:26.747 "block_size": 512, 00:04:26.747 "num_blocks": 16384, 00:04:26.747 "uuid": "b0820598-671f-58f8-ade9-6d5aba4d475a", 00:04:26.747 "assigned_rate_limits": { 00:04:26.747 "rw_ios_per_sec": 0, 00:04:26.747 "rw_mbytes_per_sec": 0, 00:04:26.747 "r_mbytes_per_sec": 0, 00:04:26.747 "w_mbytes_per_sec": 0 00:04:26.747 }, 00:04:26.747 "claimed": false, 00:04:26.747 "zoned": false, 00:04:26.747 "supported_io_types": { 00:04:26.747 "read": true, 00:04:26.747 "write": true, 00:04:26.747 "unmap": true, 00:04:26.747 "flush": true, 00:04:26.747 "reset": true, 00:04:26.747 "nvme_admin": false, 00:04:26.747 "nvme_io": false, 00:04:26.747 "nvme_io_md": false, 00:04:26.747 "write_zeroes": true, 00:04:26.747 "zcopy": true, 00:04:26.747 "get_zone_info": false, 00:04:26.747 "zone_management": false, 00:04:26.747 "zone_append": false, 00:04:26.747 "compare": false, 00:04:26.747 "compare_and_write": false, 00:04:26.747 "abort": true, 00:04:26.747 "seek_hole": false, 00:04:26.747 "seek_data": false, 00:04:26.747 "copy": true, 00:04:26.747 "nvme_iov_md": false 00:04:26.747 }, 00:04:26.747 "memory_domains": [ 00:04:26.747 { 00:04:26.747 "dma_device_id": "system", 00:04:26.747 "dma_device_type": 1 00:04:26.747 }, 00:04:26.747 { 00:04:26.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.747 "dma_device_type": 2 00:04:26.747 } 00:04:26.747 ], 00:04:26.747 "driver_specific": { 00:04:26.747 "passthru": { 00:04:26.747 "name": "Passthru0", 00:04:26.747 "base_bdev_name": "Malloc2" 00:04:26.747 } 00:04:26.747 } 00:04:26.747 } 00:04:26.747 ]' 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.747 ************************************ 00:04:26.747 END TEST rpc_daemon_integrity 00:04:26.747 ************************************ 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.747 00:04:26.747 real 0m0.311s 00:04:26.747 user 0m0.187s 00:04:26.747 sys 0m0.053s 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.747 16:19:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.747 16:19:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:26.747 16:19:12 rpc -- rpc/rpc.sh@84 -- # killprocess 58681 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@948 -- # '[' -z 58681 ']' 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@952 -- # kill -0 58681 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@953 -- # uname 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58681 00:04:26.747 killing process with pid 58681 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58681' 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@967 -- # kill 58681 00:04:26.747 16:19:12 rpc -- common/autotest_common.sh@972 -- # wait 58681 00:04:27.327 ************************************ 00:04:27.327 END TEST rpc 00:04:27.328 ************************************ 00:04:27.328 00:04:27.328 real 0m2.861s 00:04:27.328 user 0m3.700s 00:04:27.328 sys 0m0.699s 00:04:27.328 16:19:12 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.328 16:19:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.328 16:19:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.328 16:19:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:27.328 16:19:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.328 16:19:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.328 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:04:27.328 ************************************ 00:04:27.328 START TEST skip_rpc 00:04:27.328 ************************************ 00:04:27.328 16:19:12 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:27.328 * Looking for test storage... 
00:04:27.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.328 16:19:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.328 16:19:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:27.328 16:19:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:27.328 16:19:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.328 16:19:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.328 16:19:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.328 ************************************ 00:04:27.328 START TEST skip_rpc 00:04:27.328 ************************************ 00:04:27.328 16:19:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:27.328 16:19:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58879 00:04:27.328 16:19:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.328 16:19:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:27.328 16:19:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:27.328 [2024-07-15 16:19:12.816184] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:27.328 [2024-07-15 16:19:12.816267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 00:04:27.598 [2024-07-15 16:19:12.948982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.598 [2024-07-15 16:19:13.055134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.598 [2024-07-15 16:19:13.111896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:32.895 16:19:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58879 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58879 ']' 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58879 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58879 00:04:32.896 killing process with pid 58879 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58879' 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58879 00:04:32.896 16:19:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58879 00:04:32.896 00:04:32.896 real 0m5.441s 00:04:32.896 user 0m5.074s 00:04:32.896 sys 0m0.275s 00:04:32.896 16:19:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.896 ************************************ 00:04:32.896 END TEST skip_rpc 00:04:32.896 16:19:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.896 ************************************ 00:04:32.896 16:19:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:32.896 16:19:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.896 16:19:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.896 16:19:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.896 16:19:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.896 ************************************ 00:04:32.896 START TEST skip_rpc_with_json 00:04:32.896 ************************************ 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58965 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58965 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 58965 ']' 00:04:32.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
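The skip_rpc case that closed above is a negative test: the target is started with --no-rpc-server, so the spdk_get_version call is expected to fail. A minimal sketch of that check, assuming scripts/rpc.py as the client; the flags, the sleep and the RPC name are the ones visible in the log:

# Negative check mirroring skip_rpc above: with --no-rpc-server nothing listens
# on /var/tmp/spdk.sock, so the RPC must fail with a non-zero exit code.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5
if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server is up" >&2
    exit 1
fi
kill "$tgt_pid"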
00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.896 16:19:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.896 [2024-07-15 16:19:18.312826] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:32.896 [2024-07-15 16:19:18.312956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58965 ] 00:04:33.153 [2024-07-15 16:19:18.448573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.153 [2024-07-15 16:19:18.561896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.153 [2024-07-15 16:19:18.617283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.089 [2024-07-15 16:19:19.349908] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.089 request: 00:04:34.089 { 00:04:34.089 "trtype": "tcp", 00:04:34.089 "method": "nvmf_get_transports", 00:04:34.089 "req_id": 1 00:04:34.089 } 00:04:34.089 Got JSON-RPC error response 00:04:34.089 response: 00:04:34.089 { 00:04:34.089 "code": -19, 00:04:34.089 "message": "No such device" 00:04:34.089 } 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.089 [2024-07-15 16:19:19.362029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.089 16:19:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.089 { 00:04:34.089 "subsystems": [ 00:04:34.089 { 00:04:34.089 "subsystem": "keyring", 00:04:34.089 "config": [] 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "subsystem": "iobuf", 00:04:34.089 "config": [ 00:04:34.089 { 00:04:34.089 "method": "iobuf_set_options", 00:04:34.089 "params": { 00:04:34.089 "small_pool_count": 8192, 00:04:34.089 "large_pool_count": 1024, 00:04:34.089 "small_bufsize": 8192, 00:04:34.089 "large_bufsize": 135168 00:04:34.089 } 00:04:34.089 } 00:04:34.089 
] 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "subsystem": "sock", 00:04:34.089 "config": [ 00:04:34.089 { 00:04:34.089 "method": "sock_set_default_impl", 00:04:34.089 "params": { 00:04:34.089 "impl_name": "uring" 00:04:34.089 } 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "method": "sock_impl_set_options", 00:04:34.089 "params": { 00:04:34.089 "impl_name": "ssl", 00:04:34.089 "recv_buf_size": 4096, 00:04:34.089 "send_buf_size": 4096, 00:04:34.089 "enable_recv_pipe": true, 00:04:34.089 "enable_quickack": false, 00:04:34.089 "enable_placement_id": 0, 00:04:34.089 "enable_zerocopy_send_server": true, 00:04:34.089 "enable_zerocopy_send_client": false, 00:04:34.089 "zerocopy_threshold": 0, 00:04:34.089 "tls_version": 0, 00:04:34.089 "enable_ktls": false 00:04:34.089 } 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "method": "sock_impl_set_options", 00:04:34.089 "params": { 00:04:34.089 "impl_name": "posix", 00:04:34.089 "recv_buf_size": 2097152, 00:04:34.089 "send_buf_size": 2097152, 00:04:34.089 "enable_recv_pipe": true, 00:04:34.089 "enable_quickack": false, 00:04:34.089 "enable_placement_id": 0, 00:04:34.089 "enable_zerocopy_send_server": true, 00:04:34.089 "enable_zerocopy_send_client": false, 00:04:34.089 "zerocopy_threshold": 0, 00:04:34.089 "tls_version": 0, 00:04:34.089 "enable_ktls": false 00:04:34.089 } 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "method": "sock_impl_set_options", 00:04:34.089 "params": { 00:04:34.089 "impl_name": "uring", 00:04:34.089 "recv_buf_size": 2097152, 00:04:34.089 "send_buf_size": 2097152, 00:04:34.089 "enable_recv_pipe": true, 00:04:34.089 "enable_quickack": false, 00:04:34.089 "enable_placement_id": 0, 00:04:34.089 "enable_zerocopy_send_server": false, 00:04:34.089 "enable_zerocopy_send_client": false, 00:04:34.089 "zerocopy_threshold": 0, 00:04:34.089 "tls_version": 0, 00:04:34.089 "enable_ktls": false 00:04:34.089 } 00:04:34.089 } 00:04:34.089 ] 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "subsystem": "vmd", 00:04:34.089 "config": [] 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "subsystem": "accel", 00:04:34.089 "config": [ 00:04:34.089 { 00:04:34.089 "method": "accel_set_options", 00:04:34.089 "params": { 00:04:34.089 "small_cache_size": 128, 00:04:34.089 "large_cache_size": 16, 00:04:34.089 "task_count": 2048, 00:04:34.089 "sequence_count": 2048, 00:04:34.089 "buf_count": 2048 00:04:34.089 } 00:04:34.089 } 00:04:34.089 ] 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "subsystem": "bdev", 00:04:34.089 "config": [ 00:04:34.089 { 00:04:34.089 "method": "bdev_set_options", 00:04:34.089 "params": { 00:04:34.089 "bdev_io_pool_size": 65535, 00:04:34.089 "bdev_io_cache_size": 256, 00:04:34.089 "bdev_auto_examine": true, 00:04:34.089 "iobuf_small_cache_size": 128, 00:04:34.089 "iobuf_large_cache_size": 16 00:04:34.089 } 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "method": "bdev_raid_set_options", 00:04:34.089 "params": { 00:04:34.089 "process_window_size_kb": 1024 00:04:34.089 } 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "method": "bdev_iscsi_set_options", 00:04:34.089 "params": { 00:04:34.089 "timeout_sec": 30 00:04:34.089 } 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "method": "bdev_nvme_set_options", 00:04:34.089 "params": { 00:04:34.089 "action_on_timeout": "none", 00:04:34.089 "timeout_us": 0, 00:04:34.089 "timeout_admin_us": 0, 00:04:34.089 "keep_alive_timeout_ms": 10000, 00:04:34.089 "arbitration_burst": 0, 00:04:34.089 "low_priority_weight": 0, 00:04:34.089 "medium_priority_weight": 0, 00:04:34.089 "high_priority_weight": 0, 00:04:34.089 
"nvme_adminq_poll_period_us": 10000, 00:04:34.089 "nvme_ioq_poll_period_us": 0, 00:04:34.089 "io_queue_requests": 0, 00:04:34.089 "delay_cmd_submit": true, 00:04:34.089 "transport_retry_count": 4, 00:04:34.089 "bdev_retry_count": 3, 00:04:34.089 "transport_ack_timeout": 0, 00:04:34.089 "ctrlr_loss_timeout_sec": 0, 00:04:34.089 "reconnect_delay_sec": 0, 00:04:34.089 "fast_io_fail_timeout_sec": 0, 00:04:34.089 "disable_auto_failback": false, 00:04:34.089 "generate_uuids": false, 00:04:34.089 "transport_tos": 0, 00:04:34.089 "nvme_error_stat": false, 00:04:34.089 "rdma_srq_size": 0, 00:04:34.089 "io_path_stat": false, 00:04:34.089 "allow_accel_sequence": false, 00:04:34.089 "rdma_max_cq_size": 0, 00:04:34.089 "rdma_cm_event_timeout_ms": 0, 00:04:34.089 "dhchap_digests": [ 00:04:34.089 "sha256", 00:04:34.089 "sha384", 00:04:34.089 "sha512" 00:04:34.089 ], 00:04:34.089 "dhchap_dhgroups": [ 00:04:34.089 "null", 00:04:34.089 "ffdhe2048", 00:04:34.089 "ffdhe3072", 00:04:34.089 "ffdhe4096", 00:04:34.089 "ffdhe6144", 00:04:34.089 "ffdhe8192" 00:04:34.089 ] 00:04:34.089 } 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "method": "bdev_nvme_set_hotplug", 00:04:34.089 "params": { 00:04:34.089 "period_us": 100000, 00:04:34.089 "enable": false 00:04:34.089 } 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "method": "bdev_wait_for_examine" 00:04:34.089 } 00:04:34.089 ] 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "subsystem": "scsi", 00:04:34.089 "config": null 00:04:34.089 }, 00:04:34.089 { 00:04:34.089 "subsystem": "scheduler", 00:04:34.089 "config": [ 00:04:34.089 { 00:04:34.089 "method": "framework_set_scheduler", 00:04:34.089 "params": { 00:04:34.089 "name": "static" 00:04:34.089 } 00:04:34.090 } 00:04:34.090 ] 00:04:34.090 }, 00:04:34.090 { 00:04:34.090 "subsystem": "vhost_scsi", 00:04:34.090 "config": [] 00:04:34.090 }, 00:04:34.090 { 00:04:34.090 "subsystem": "vhost_blk", 00:04:34.090 "config": [] 00:04:34.090 }, 00:04:34.090 { 00:04:34.090 "subsystem": "ublk", 00:04:34.090 "config": [] 00:04:34.090 }, 00:04:34.090 { 00:04:34.090 "subsystem": "nbd", 00:04:34.090 "config": [] 00:04:34.090 }, 00:04:34.090 { 00:04:34.090 "subsystem": "nvmf", 00:04:34.090 "config": [ 00:04:34.090 { 00:04:34.090 "method": "nvmf_set_config", 00:04:34.090 "params": { 00:04:34.090 "discovery_filter": "match_any", 00:04:34.090 "admin_cmd_passthru": { 00:04:34.090 "identify_ctrlr": false 00:04:34.090 } 00:04:34.090 } 00:04:34.090 }, 00:04:34.090 { 00:04:34.090 "method": "nvmf_set_max_subsystems", 00:04:34.090 "params": { 00:04:34.090 "max_subsystems": 1024 00:04:34.090 } 00:04:34.090 }, 00:04:34.090 { 00:04:34.090 "method": "nvmf_set_crdt", 00:04:34.090 "params": { 00:04:34.090 "crdt1": 0, 00:04:34.090 "crdt2": 0, 00:04:34.090 "crdt3": 0 00:04:34.090 } 00:04:34.090 }, 00:04:34.090 { 00:04:34.090 "method": "nvmf_create_transport", 00:04:34.090 "params": { 00:04:34.090 "trtype": "TCP", 00:04:34.090 "max_queue_depth": 128, 00:04:34.090 "max_io_qpairs_per_ctrlr": 127, 00:04:34.090 "in_capsule_data_size": 4096, 00:04:34.090 "max_io_size": 131072, 00:04:34.090 "io_unit_size": 131072, 00:04:34.090 "max_aq_depth": 128, 00:04:34.090 "num_shared_buffers": 511, 00:04:34.090 "buf_cache_size": 4294967295, 00:04:34.090 "dif_insert_or_strip": false, 00:04:34.090 "zcopy": false, 00:04:34.090 "c2h_success": true, 00:04:34.090 "sock_priority": 0, 00:04:34.090 "abort_timeout_sec": 1, 00:04:34.090 "ack_timeout": 0, 00:04:34.090 "data_wr_pool_size": 0 00:04:34.090 } 00:04:34.090 } 00:04:34.090 ] 00:04:34.090 }, 00:04:34.090 { 00:04:34.090 "subsystem": 
"iscsi", 00:04:34.090 "config": [ 00:04:34.090 { 00:04:34.090 "method": "iscsi_set_options", 00:04:34.090 "params": { 00:04:34.090 "node_base": "iqn.2016-06.io.spdk", 00:04:34.090 "max_sessions": 128, 00:04:34.090 "max_connections_per_session": 2, 00:04:34.090 "max_queue_depth": 64, 00:04:34.090 "default_time2wait": 2, 00:04:34.090 "default_time2retain": 20, 00:04:34.090 "first_burst_length": 8192, 00:04:34.090 "immediate_data": true, 00:04:34.090 "allow_duplicated_isid": false, 00:04:34.090 "error_recovery_level": 0, 00:04:34.090 "nop_timeout": 60, 00:04:34.090 "nop_in_interval": 30, 00:04:34.090 "disable_chap": false, 00:04:34.090 "require_chap": false, 00:04:34.090 "mutual_chap": false, 00:04:34.090 "chap_group": 0, 00:04:34.090 "max_large_datain_per_connection": 64, 00:04:34.090 "max_r2t_per_connection": 4, 00:04:34.090 "pdu_pool_size": 36864, 00:04:34.090 "immediate_data_pool_size": 16384, 00:04:34.090 "data_out_pool_size": 2048 00:04:34.090 } 00:04:34.090 } 00:04:34.090 ] 00:04:34.090 } 00:04:34.090 ] 00:04:34.090 } 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58965 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58965 ']' 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58965 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58965 00:04:34.090 killing process with pid 58965 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58965' 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58965 00:04:34.090 16:19:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58965 00:04:34.659 16:19:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58993 00:04:34.659 16:19:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.659 16:19:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:39.959 16:19:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58993 00:04:39.959 16:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58993 ']' 00:04:39.959 16:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58993 00:04:39.959 16:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:39.959 16:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.959 16:19:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58993 00:04:39.959 killing process with pid 58993 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.959 16:19:25 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58993' 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58993 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58993 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.959 ************************************ 00:04:39.959 END TEST skip_rpc_with_json 00:04:39.959 ************************************ 00:04:39.959 00:04:39.959 real 0m7.129s 00:04:39.959 user 0m6.898s 00:04:39.959 sys 0m0.658s 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.959 16:19:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.959 16:19:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:39.959 16:19:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.959 16:19:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.959 16:19:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.959 ************************************ 00:04:39.959 START TEST skip_rpc_with_delay 00:04:39.959 ************************************ 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.959 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.960 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.960 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.960 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.960 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.960 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.960 [2024-07-15 
16:19:25.497210] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:39.960 [2024-07-15 16:19:25.497332] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:40.217 ************************************ 00:04:40.217 END TEST skip_rpc_with_delay 00:04:40.217 ************************************ 00:04:40.217 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:40.217 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:40.217 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:40.217 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:40.217 00:04:40.217 real 0m0.076s 00:04:40.217 user 0m0.043s 00:04:40.217 sys 0m0.031s 00:04:40.217 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.217 16:19:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.217 16:19:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.217 16:19:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.217 16:19:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.217 16:19:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.217 16:19:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.217 16:19:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.217 16:19:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.217 ************************************ 00:04:40.217 START TEST exit_on_failed_rpc_init 00:04:40.217 ************************************ 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59108 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59108 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59108 ']' 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.217 16:19:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.217 [2024-07-15 16:19:25.627883] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
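The app.c error above is the expected result the delay test asserts: --wait-for-rpc has no meaning when the RPC server is disabled, so spdk_tgt must refuse the combination and exit non-zero. Outside the NOT wrapper the check reduces to roughly this (a sketch; the exit-code echo is added for illustration):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?   # expected non-zero: cannot wait for RPCs that will never be served

The exit_on_failed_rpc_init run that starts next uses the same binary, but the failure it is looking for comes from the RPC listen socket rather than from argument parsing.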
00:04:40.217 [2024-07-15 16:19:25.627968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59108 ] 00:04:40.217 [2024-07-15 16:19:25.764406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.475 [2024-07-15 16:19:25.866717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.475 [2024-07-15 16:19:25.920990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.755 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.755 [2024-07-15 16:19:26.183381] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:40.755 [2024-07-15 16:19:26.183483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59113 ] 00:04:41.013 [2024-07-15 16:19:26.322050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.013 [2024-07-15 16:19:26.436739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.013 [2024-07-15 16:19:26.436874] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
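This 'socket in use' error is the point of the test: the first target (pid 59108) owns the default control socket /var/tmp/spdk.sock, and the second instance was launched with only -m 0x2, so it tries to bind the same path and must fail. Running two targets side by side for real would give each its own RPC socket, along the lines of (a sketch; the second socket name is illustrative):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &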
00:04:41.013 [2024-07-15 16:19:26.436905] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:41.013 [2024-07-15 16:19:26.436915] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59108 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59108 ']' 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59108 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59108 00:04:41.013 killing process with pid 59108 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59108' 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59108 00:04:41.013 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59108 00:04:41.579 ************************************ 00:04:41.579 END TEST exit_on_failed_rpc_init 00:04:41.579 ************************************ 00:04:41.579 00:04:41.579 real 0m1.381s 00:04:41.579 user 0m1.542s 00:04:41.579 sys 0m0.392s 00:04:41.579 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.579 16:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.579 16:19:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.579 16:19:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.579 ************************************ 00:04:41.579 END TEST skip_rpc 00:04:41.579 ************************************ 00:04:41.579 00:04:41.579 real 0m14.319s 00:04:41.579 user 0m13.657s 00:04:41.579 sys 0m1.533s 00:04:41.579 16:19:26 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.579 16:19:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.579 16:19:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.579 16:19:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.579 16:19:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.579 
16:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.579 16:19:27 -- common/autotest_common.sh@10 -- # set +x 00:04:41.579 ************************************ 00:04:41.579 START TEST rpc_client 00:04:41.579 ************************************ 00:04:41.579 16:19:27 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.579 * Looking for test storage... 00:04:41.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:41.579 16:19:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:41.838 OK 00:04:41.838 16:19:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:41.838 00:04:41.838 real 0m0.100s 00:04:41.838 user 0m0.045s 00:04:41.838 sys 0m0.061s 00:04:41.838 16:19:27 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.838 16:19:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 ************************************ 00:04:41.838 END TEST rpc_client 00:04:41.838 ************************************ 00:04:41.838 16:19:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.838 16:19:27 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.838 16:19:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.838 16:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.838 16:19:27 -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 ************************************ 00:04:41.838 START TEST json_config 00:04:41.838 ************************************ 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.838 16:19:27 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.838 16:19:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.838 16:19:27 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.838 16:19:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.838 16:19:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.838 16:19:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.838 16:19:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.838 16:19:27 json_config -- paths/export.sh@5 -- # export PATH 00:04:41.838 16:19:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@47 -- # : 0 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:41.838 16:19:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.838 INFO: JSON configuration test init 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 16:19:27 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:41.838 16:19:27 json_config -- json_config/common.sh@9 -- # local app=target 00:04:41.838 16:19:27 json_config -- json_config/common.sh@10 -- # shift 00:04:41.838 16:19:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.838 16:19:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.838 16:19:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.838 16:19:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.838 16:19:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.838 16:19:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59231 00:04:41.838 Waiting for target to run... 00:04:41.838 16:19:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
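The target below is started with --wait-for-rpc, which keeps subsystem initialization parked until configuration arrives over the control socket; everything after that is driven through rpc.py against /var/tmp/spdk_tgt.sock. Stripped of the tgt_rpc helper, the flow the harness performs just below is roughly (a sketch; framework_start_init is the usual RPC that completes startup after --wait-for-rpc and is assumed here, it is not shown verbatim in this excerpt):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | \
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init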
00:04:41.838 16:19:27 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:41.838 16:19:27 json_config -- json_config/common.sh@25 -- # waitforlisten 59231 /var/tmp/spdk_tgt.sock 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@829 -- # '[' -z 59231 ']' 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.838 16:19:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 [2024-07-15 16:19:27.345056] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:41.838 [2024-07-15 16:19:27.345134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59231 ] 00:04:42.464 [2024-07-15 16:19:27.755828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.464 [2024-07-15 16:19:27.837042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.030 16:19:28 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.030 16:19:28 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:43.030 00:04:43.030 16:19:28 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.030 16:19:28 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:43.030 16:19:28 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:43.030 16:19:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.030 16:19:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.031 16:19:28 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:43.031 16:19:28 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:43.031 16:19:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.031 16:19:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.031 16:19:28 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:43.031 16:19:28 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:43.031 16:19:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:43.289 [2024-07-15 16:19:28.625595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:43.289 16:19:28 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:43.289 16:19:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:43.289 16:19:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.289 16:19:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.289 16:19:28 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:43.289 16:19:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:43.289 16:19:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:43.289 16:19:28 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:43.289 16:19:28 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:43.289 16:19:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:43.547 16:19:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.547 16:19:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:43.547 16:19:29 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:43.547 16:19:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.547 16:19:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.805 16:19:29 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:43.805 16:19:29 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:43.805 16:19:29 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:43.806 16:19:29 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.806 16:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:44.064 MallocForNvmf0 00:04:44.064 16:19:29 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:44.064 16:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:44.322 MallocForNvmf1 00:04:44.322 16:19:29 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:44.322 16:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:44.322 [2024-07-15 16:19:29.864086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.580 16:19:29 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:44.580 16:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:44.580 16:19:30 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:44.580 16:19:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:44.839 16:19:30 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:44.839 16:19:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:45.097 16:19:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:45.097 16:19:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:45.356 [2024-07-15 16:19:30.884683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:45.356 16:19:30 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:45.356 16:19:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.356 16:19:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.615 16:19:30 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:45.615 16:19:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.615 16:19:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.615 16:19:30 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:45.615 16:19:30 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:45.615 16:19:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:45.883 MallocBdevForConfigChangeCheck 00:04:45.884 16:19:31 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:45.884 16:19:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.884 16:19:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.884 16:19:31 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:45.884 16:19:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.142 INFO: shutting down applications... 00:04:46.142 16:19:31 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
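Condensed, the bring-up the test just replayed through tgt_rpc over /var/tmp/spdk_tgt.sock is the standard NVMe-oF/TCP sequence (same commands as above, with the rpc.py path shortened for readability):

    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The extra MallocBdevForConfigChangeCheck bdev and the save_config snapshot taken right before the shutdown give the later configuration diff something concrete to compare against.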
00:04:46.142 16:19:31 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:46.142 16:19:31 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:46.142 16:19:31 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:46.142 16:19:31 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:46.401 Calling clear_iscsi_subsystem 00:04:46.401 Calling clear_nvmf_subsystem 00:04:46.401 Calling clear_nbd_subsystem 00:04:46.401 Calling clear_ublk_subsystem 00:04:46.401 Calling clear_vhost_blk_subsystem 00:04:46.401 Calling clear_vhost_scsi_subsystem 00:04:46.401 Calling clear_bdev_subsystem 00:04:46.748 16:19:31 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:46.748 16:19:31 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:46.748 16:19:31 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:46.748 16:19:31 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.748 16:19:31 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:46.748 16:19:31 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:47.007 16:19:32 json_config -- json_config/json_config.sh@345 -- # break 00:04:47.007 16:19:32 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:47.007 16:19:32 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:47.007 16:19:32 json_config -- json_config/common.sh@31 -- # local app=target 00:04:47.007 16:19:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.007 16:19:32 json_config -- json_config/common.sh@35 -- # [[ -n 59231 ]] 00:04:47.007 16:19:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59231 00:04:47.007 16:19:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.007 16:19:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.007 16:19:32 json_config -- json_config/common.sh@41 -- # kill -0 59231 00:04:47.007 16:19:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.573 16:19:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.573 16:19:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.573 16:19:32 json_config -- json_config/common.sh@41 -- # kill -0 59231 00:04:47.573 16:19:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.573 16:19:32 json_config -- json_config/common.sh@43 -- # break 00:04:47.573 16:19:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.573 SPDK target shutdown done 00:04:47.573 16:19:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.573 INFO: relaunching applications... 00:04:47.573 16:19:32 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:04:47.573 16:19:32 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.573 16:19:32 json_config -- json_config/common.sh@9 -- # local app=target 00:04:47.573 16:19:32 json_config -- json_config/common.sh@10 -- # shift 00:04:47.573 16:19:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.573 16:19:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.573 16:19:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.573 16:19:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.573 16:19:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.573 16:19:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59427 00:04:47.573 Waiting for target to run... 00:04:47.573 16:19:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:47.573 16:19:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.573 16:19:32 json_config -- json_config/common.sh@25 -- # waitforlisten 59427 /var/tmp/spdk_tgt.sock 00:04:47.573 16:19:32 json_config -- common/autotest_common.sh@829 -- # '[' -z 59427 ']' 00:04:47.573 16:19:32 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.573 16:19:32 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.573 16:19:32 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.573 16:19:32 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.573 16:19:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.573 [2024-07-15 16:19:32.914415] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:47.573 [2024-07-15 16:19:32.914499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59427 ] 00:04:47.831 [2024-07-15 16:19:33.333472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.089 [2024-07-15 16:19:33.419468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.089 [2024-07-15 16:19:33.546059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:48.348 [2024-07-15 16:19:33.750334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.348 [2024-07-15 16:19:33.782427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.607 16:19:34 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.607 16:19:34 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:48.607 00:04:48.607 16:19:34 json_config -- json_config/common.sh@26 -- # echo '' 00:04:48.607 16:19:34 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:48.607 INFO: Checking if target configuration is the same... 
00:04:48.607 16:19:34 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:48.607 16:19:34 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.607 16:19:34 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:48.607 16:19:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.607 + '[' 2 -ne 2 ']' 00:04:48.607 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:48.607 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:48.607 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:48.607 +++ basename /dev/fd/62 00:04:48.607 ++ mktemp /tmp/62.XXX 00:04:48.607 + tmp_file_1=/tmp/62.ZJX 00:04:48.607 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.607 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.607 + tmp_file_2=/tmp/spdk_tgt_config.json.gXa 00:04:48.607 + ret=0 00:04:48.607 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.174 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.174 + diff -u /tmp/62.ZJX /tmp/spdk_tgt_config.json.gXa 00:04:49.174 INFO: JSON config files are the same 00:04:49.174 + echo 'INFO: JSON config files are the same' 00:04:49.174 + rm /tmp/62.ZJX /tmp/spdk_tgt_config.json.gXa 00:04:49.174 + exit 0 00:04:49.174 16:19:34 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:49.174 INFO: changing configuration and checking if this can be detected... 00:04:49.174 16:19:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:49.174 16:19:34 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:49.174 16:19:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:49.432 16:19:34 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.432 16:19:34 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:49.432 16:19:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.432 + '[' 2 -ne 2 ']' 00:04:49.432 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:49.432 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:49.432 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:49.432 +++ basename /dev/fd/62 00:04:49.432 ++ mktemp /tmp/62.XXX 00:04:49.432 + tmp_file_1=/tmp/62.5IP 00:04:49.432 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.432 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:49.432 + tmp_file_2=/tmp/spdk_tgt_config.json.Dj9 00:04:49.432 + ret=0 00:04:49.432 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.691 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.950 + diff -u /tmp/62.5IP /tmp/spdk_tgt_config.json.Dj9 00:04:49.950 + ret=1 00:04:49.950 + echo '=== Start of file: /tmp/62.5IP ===' 00:04:49.950 + cat /tmp/62.5IP 00:04:49.950 + echo '=== End of file: /tmp/62.5IP ===' 00:04:49.950 + echo '' 00:04:49.950 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Dj9 ===' 00:04:49.950 + cat /tmp/spdk_tgt_config.json.Dj9 00:04:49.950 + echo '=== End of file: /tmp/spdk_tgt_config.json.Dj9 ===' 00:04:49.950 + echo '' 00:04:49.950 + rm /tmp/62.5IP /tmp/spdk_tgt_config.json.Dj9 00:04:49.950 + exit 1 00:04:49.950 INFO: configuration change detected. 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@317 -- # [[ -n 59427 ]] 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.950 16:19:35 json_config -- json_config/json_config.sh@323 -- # killprocess 59427 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@948 -- # '[' -z 59427 ']' 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@952 -- # kill -0 59427 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@953 -- # uname 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59427 00:04:49.950 
16:19:35 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.950 killing process with pid 59427 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59427' 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@967 -- # kill 59427 00:04:49.950 16:19:35 json_config -- common/autotest_common.sh@972 -- # wait 59427 00:04:50.209 16:19:35 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.209 16:19:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:50.209 16:19:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:50.209 16:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.209 16:19:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:50.209 INFO: Success 00:04:50.209 16:19:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:50.209 00:04:50.209 real 0m8.469s 00:04:50.209 user 0m12.188s 00:04:50.209 sys 0m1.721s 00:04:50.209 16:19:35 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.209 16:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.209 ************************************ 00:04:50.209 END TEST json_config 00:04:50.209 ************************************ 00:04:50.209 16:19:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.209 16:19:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.209 16:19:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.209 16:19:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.209 16:19:35 -- common/autotest_common.sh@10 -- # set +x 00:04:50.209 ************************************ 00:04:50.209 START TEST json_config_extra_key 00:04:50.209 ************************************ 00:04:50.209 16:19:35 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.209 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.469 16:19:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.469 16:19:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.469 16:19:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.469 16:19:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.469 16:19:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.469 16:19:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.469 16:19:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.469 16:19:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.469 16:19:35 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:50.469 16:19:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.469 INFO: launching applications... 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:50.469 16:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59568 00:04:50.469 Waiting for target to run... 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
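The json_config_extra_key variant below boots the target directly from test/json_config/extra_key.json and then sits in waitforlisten until the control socket answers. Outside the harness the same start-and-wait pattern looks roughly like this (a sketch; the rpc_get_methods poll is a stand-in for waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done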
00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59568 /var/tmp/spdk_tgt.sock 00:04:50.469 16:19:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.469 16:19:35 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59568 ']' 00:04:50.469 16:19:35 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.469 16:19:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.469 16:19:35 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.469 16:19:35 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.469 16:19:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.469 [2024-07-15 16:19:35.831303] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:50.469 [2024-07-15 16:19:35.831392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59568 ] 00:04:50.744 [2024-07-15 16:19:36.239498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.002 [2024-07-15 16:19:36.324060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.002 [2024-07-15 16:19:36.344466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:51.569 16:19:36 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.569 16:19:36 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:51.569 00:04:51.569 16:19:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.569 INFO: shutting down applications... 00:04:51.569 16:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
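For reference, the start-up sequence traced above (json_config_test_start_app plus waitforlisten) reduces to launching spdk_tgt against a private RPC socket with the extra_key.json config and polling that socket until an RPC succeeds. A stand-alone sketch of the same pattern; the retry budget and timeout here are illustrative, not the autotest_common.sh values:

  # Start the target with the JSON config used by this test.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK/test/json_config/extra_key.json" &
  tgt_pid=$!
  # Poll the socket until the target answers (waitforlisten-style loop).
  for _ in $(seq 1 100); do   # retry budget chosen for illustration
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done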
00:04:51.569 16:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.569 16:19:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.569 16:19:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.569 16:19:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59568 ]] 00:04:51.569 16:19:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59568 00:04:51.569 16:19:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.569 16:19:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.569 16:19:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59568 00:04:51.569 16:19:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.828 16:19:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.828 16:19:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.828 16:19:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59568 00:04:51.828 16:19:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.828 16:19:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.828 16:19:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.828 SPDK target shutdown done 00:04:51.828 16:19:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.828 Success 00:04:51.828 16:19:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.828 00:04:51.828 real 0m1.649s 00:04:51.828 user 0m1.597s 00:04:51.828 sys 0m0.406s 00:04:51.828 16:19:37 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.828 16:19:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.828 ************************************ 00:04:51.828 END TEST json_config_extra_key 00:04:51.828 ************************************ 00:04:52.087 16:19:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.087 16:19:37 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.087 16:19:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.087 16:19:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.087 16:19:37 -- common/autotest_common.sh@10 -- # set +x 00:04:52.087 ************************************ 00:04:52.087 START TEST alias_rpc 00:04:52.087 ************************************ 00:04:52.087 16:19:37 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.087 * Looking for test storage... 
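Before the log moves on to alias_rpc: the shutdown side of json_config_extra_key traced just above (kill -SIGINT, then up to 30 'kill -0' probes spaced 0.5 s apart) can be reproduced stand-alone as the following sketch:

  # SIGINT the target, then wait for the pid to disappear, as the test did above.
  kill -SIGINT "$tgt_pid"
  for _ in $(seq 1 30); do
      kill -0 "$tgt_pid" 2>/dev/null || break   # kill -0 succeeds while the process is alive
      sleep 0.5
  done
  echo 'SPDK target shutdown done'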
00:04:52.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:52.087 16:19:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.087 16:19:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59632 00:04:52.087 16:19:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.087 16:19:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59632 00:04:52.087 16:19:37 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59632 ']' 00:04:52.087 16:19:37 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.087 16:19:37 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.087 16:19:37 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.087 16:19:37 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.087 16:19:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.087 [2024-07-15 16:19:37.541853] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:52.087 [2024-07-15 16:19:37.541938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59632 ] 00:04:52.345 [2024-07-15 16:19:37.673181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.345 [2024-07-15 16:19:37.780902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.345 [2024-07-15 16:19:37.835188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:53.280 16:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:53.280 16:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59632 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59632 ']' 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59632 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59632 00:04:53.280 killing process with pid 59632 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59632' 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@967 -- # kill 59632 00:04:53.280 16:19:38 alias_rpc -- common/autotest_common.sh@972 -- # wait 59632 00:04:53.848 00:04:53.848 real 0m1.795s 00:04:53.848 user 0m2.030s 00:04:53.848 sys 0m0.428s 00:04:53.848 16:19:39 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.848 16:19:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 
************************************ 00:04:53.848 END TEST alias_rpc 00:04:53.848 ************************************ 00:04:53.848 16:19:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.848 16:19:39 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:53.848 16:19:39 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:53.848 16:19:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.848 16:19:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.848 16:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 ************************************ 00:04:53.848 START TEST spdkcli_tcp 00:04:53.848 ************************************ 00:04:53.848 16:19:39 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:53.848 * Looking for test storage... 00:04:53.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:53.848 16:19:39 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.848 16:19:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59708 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59708 00:04:53.848 16:19:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:53.848 16:19:39 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59708 ']' 00:04:53.848 16:19:39 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.848 16:19:39 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.848 16:19:39 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.848 16:19:39 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.848 16:19:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.848 [2024-07-15 16:19:39.395022] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
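The spdkcli_tcp suite starting here exercises the RPC path over TCP: as traced below, it relays TCP port 9998 to the target's Unix RPC socket with socat and then points rpc.py at 127.0.0.1:9998. Stripped of the test harness, the bridge looks roughly like this; the socket path, port, and rpc.py options are taken from the trace, everything else is illustrative:

  # Relay TCP 9998 to the SPDK RPC Unix socket, then issue an RPC over TCP.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # -r 100 and -t 2 are the connection-retry and timeout knobs used by the test.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"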
00:04:53.848 [2024-07-15 16:19:39.395131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59708 ] 00:04:54.107 [2024-07-15 16:19:39.525349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.107 [2024-07-15 16:19:39.639182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.107 [2024-07-15 16:19:39.639191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.391 [2024-07-15 16:19:39.693935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.967 16:19:40 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.967 16:19:40 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:54.967 16:19:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59725 00:04:54.967 16:19:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:54.967 16:19:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:55.226 [ 00:04:55.226 "bdev_malloc_delete", 00:04:55.226 "bdev_malloc_create", 00:04:55.226 "bdev_null_resize", 00:04:55.226 "bdev_null_delete", 00:04:55.226 "bdev_null_create", 00:04:55.226 "bdev_nvme_cuse_unregister", 00:04:55.226 "bdev_nvme_cuse_register", 00:04:55.226 "bdev_opal_new_user", 00:04:55.226 "bdev_opal_set_lock_state", 00:04:55.226 "bdev_opal_delete", 00:04:55.226 "bdev_opal_get_info", 00:04:55.226 "bdev_opal_create", 00:04:55.226 "bdev_nvme_opal_revert", 00:04:55.226 "bdev_nvme_opal_init", 00:04:55.226 "bdev_nvme_send_cmd", 00:04:55.226 "bdev_nvme_get_path_iostat", 00:04:55.226 "bdev_nvme_get_mdns_discovery_info", 00:04:55.226 "bdev_nvme_stop_mdns_discovery", 00:04:55.226 "bdev_nvme_start_mdns_discovery", 00:04:55.226 "bdev_nvme_set_multipath_policy", 00:04:55.226 "bdev_nvme_set_preferred_path", 00:04:55.226 "bdev_nvme_get_io_paths", 00:04:55.226 "bdev_nvme_remove_error_injection", 00:04:55.226 "bdev_nvme_add_error_injection", 00:04:55.226 "bdev_nvme_get_discovery_info", 00:04:55.226 "bdev_nvme_stop_discovery", 00:04:55.226 "bdev_nvme_start_discovery", 00:04:55.226 "bdev_nvme_get_controller_health_info", 00:04:55.226 "bdev_nvme_disable_controller", 00:04:55.226 "bdev_nvme_enable_controller", 00:04:55.226 "bdev_nvme_reset_controller", 00:04:55.226 "bdev_nvme_get_transport_statistics", 00:04:55.226 "bdev_nvme_apply_firmware", 00:04:55.226 "bdev_nvme_detach_controller", 00:04:55.226 "bdev_nvme_get_controllers", 00:04:55.226 "bdev_nvme_attach_controller", 00:04:55.226 "bdev_nvme_set_hotplug", 00:04:55.226 "bdev_nvme_set_options", 00:04:55.226 "bdev_passthru_delete", 00:04:55.226 "bdev_passthru_create", 00:04:55.226 "bdev_lvol_set_parent_bdev", 00:04:55.226 "bdev_lvol_set_parent", 00:04:55.226 "bdev_lvol_check_shallow_copy", 00:04:55.226 "bdev_lvol_start_shallow_copy", 00:04:55.226 "bdev_lvol_grow_lvstore", 00:04:55.226 "bdev_lvol_get_lvols", 00:04:55.226 "bdev_lvol_get_lvstores", 00:04:55.226 "bdev_lvol_delete", 00:04:55.226 "bdev_lvol_set_read_only", 00:04:55.226 "bdev_lvol_resize", 00:04:55.226 "bdev_lvol_decouple_parent", 00:04:55.226 "bdev_lvol_inflate", 00:04:55.226 "bdev_lvol_rename", 00:04:55.226 "bdev_lvol_clone_bdev", 00:04:55.226 "bdev_lvol_clone", 00:04:55.226 "bdev_lvol_snapshot", 00:04:55.226 "bdev_lvol_create", 
00:04:55.226 "bdev_lvol_delete_lvstore", 00:04:55.226 "bdev_lvol_rename_lvstore", 00:04:55.226 "bdev_lvol_create_lvstore", 00:04:55.226 "bdev_raid_set_options", 00:04:55.226 "bdev_raid_remove_base_bdev", 00:04:55.226 "bdev_raid_add_base_bdev", 00:04:55.226 "bdev_raid_delete", 00:04:55.226 "bdev_raid_create", 00:04:55.226 "bdev_raid_get_bdevs", 00:04:55.226 "bdev_error_inject_error", 00:04:55.226 "bdev_error_delete", 00:04:55.226 "bdev_error_create", 00:04:55.226 "bdev_split_delete", 00:04:55.226 "bdev_split_create", 00:04:55.226 "bdev_delay_delete", 00:04:55.226 "bdev_delay_create", 00:04:55.226 "bdev_delay_update_latency", 00:04:55.226 "bdev_zone_block_delete", 00:04:55.226 "bdev_zone_block_create", 00:04:55.226 "blobfs_create", 00:04:55.226 "blobfs_detect", 00:04:55.226 "blobfs_set_cache_size", 00:04:55.226 "bdev_aio_delete", 00:04:55.226 "bdev_aio_rescan", 00:04:55.226 "bdev_aio_create", 00:04:55.226 "bdev_ftl_set_property", 00:04:55.226 "bdev_ftl_get_properties", 00:04:55.226 "bdev_ftl_get_stats", 00:04:55.226 "bdev_ftl_unmap", 00:04:55.226 "bdev_ftl_unload", 00:04:55.226 "bdev_ftl_delete", 00:04:55.226 "bdev_ftl_load", 00:04:55.226 "bdev_ftl_create", 00:04:55.226 "bdev_virtio_attach_controller", 00:04:55.226 "bdev_virtio_scsi_get_devices", 00:04:55.226 "bdev_virtio_detach_controller", 00:04:55.226 "bdev_virtio_blk_set_hotplug", 00:04:55.226 "bdev_iscsi_delete", 00:04:55.226 "bdev_iscsi_create", 00:04:55.226 "bdev_iscsi_set_options", 00:04:55.226 "bdev_uring_delete", 00:04:55.226 "bdev_uring_rescan", 00:04:55.226 "bdev_uring_create", 00:04:55.226 "accel_error_inject_error", 00:04:55.226 "ioat_scan_accel_module", 00:04:55.226 "dsa_scan_accel_module", 00:04:55.226 "iaa_scan_accel_module", 00:04:55.226 "keyring_file_remove_key", 00:04:55.226 "keyring_file_add_key", 00:04:55.226 "keyring_linux_set_options", 00:04:55.226 "iscsi_get_histogram", 00:04:55.226 "iscsi_enable_histogram", 00:04:55.226 "iscsi_set_options", 00:04:55.226 "iscsi_get_auth_groups", 00:04:55.226 "iscsi_auth_group_remove_secret", 00:04:55.226 "iscsi_auth_group_add_secret", 00:04:55.226 "iscsi_delete_auth_group", 00:04:55.226 "iscsi_create_auth_group", 00:04:55.226 "iscsi_set_discovery_auth", 00:04:55.226 "iscsi_get_options", 00:04:55.226 "iscsi_target_node_request_logout", 00:04:55.226 "iscsi_target_node_set_redirect", 00:04:55.226 "iscsi_target_node_set_auth", 00:04:55.226 "iscsi_target_node_add_lun", 00:04:55.226 "iscsi_get_stats", 00:04:55.226 "iscsi_get_connections", 00:04:55.226 "iscsi_portal_group_set_auth", 00:04:55.226 "iscsi_start_portal_group", 00:04:55.226 "iscsi_delete_portal_group", 00:04:55.226 "iscsi_create_portal_group", 00:04:55.227 "iscsi_get_portal_groups", 00:04:55.227 "iscsi_delete_target_node", 00:04:55.227 "iscsi_target_node_remove_pg_ig_maps", 00:04:55.227 "iscsi_target_node_add_pg_ig_maps", 00:04:55.227 "iscsi_create_target_node", 00:04:55.227 "iscsi_get_target_nodes", 00:04:55.227 "iscsi_delete_initiator_group", 00:04:55.227 "iscsi_initiator_group_remove_initiators", 00:04:55.227 "iscsi_initiator_group_add_initiators", 00:04:55.227 "iscsi_create_initiator_group", 00:04:55.227 "iscsi_get_initiator_groups", 00:04:55.227 "nvmf_set_crdt", 00:04:55.227 "nvmf_set_config", 00:04:55.227 "nvmf_set_max_subsystems", 00:04:55.227 "nvmf_stop_mdns_prr", 00:04:55.227 "nvmf_publish_mdns_prr", 00:04:55.227 "nvmf_subsystem_get_listeners", 00:04:55.227 "nvmf_subsystem_get_qpairs", 00:04:55.227 "nvmf_subsystem_get_controllers", 00:04:55.227 "nvmf_get_stats", 00:04:55.227 "nvmf_get_transports", 00:04:55.227 
"nvmf_create_transport", 00:04:55.227 "nvmf_get_targets", 00:04:55.227 "nvmf_delete_target", 00:04:55.227 "nvmf_create_target", 00:04:55.227 "nvmf_subsystem_allow_any_host", 00:04:55.227 "nvmf_subsystem_remove_host", 00:04:55.227 "nvmf_subsystem_add_host", 00:04:55.227 "nvmf_ns_remove_host", 00:04:55.227 "nvmf_ns_add_host", 00:04:55.227 "nvmf_subsystem_remove_ns", 00:04:55.227 "nvmf_subsystem_add_ns", 00:04:55.227 "nvmf_subsystem_listener_set_ana_state", 00:04:55.227 "nvmf_discovery_get_referrals", 00:04:55.227 "nvmf_discovery_remove_referral", 00:04:55.227 "nvmf_discovery_add_referral", 00:04:55.227 "nvmf_subsystem_remove_listener", 00:04:55.227 "nvmf_subsystem_add_listener", 00:04:55.227 "nvmf_delete_subsystem", 00:04:55.227 "nvmf_create_subsystem", 00:04:55.227 "nvmf_get_subsystems", 00:04:55.227 "env_dpdk_get_mem_stats", 00:04:55.227 "nbd_get_disks", 00:04:55.227 "nbd_stop_disk", 00:04:55.227 "nbd_start_disk", 00:04:55.227 "ublk_recover_disk", 00:04:55.227 "ublk_get_disks", 00:04:55.227 "ublk_stop_disk", 00:04:55.227 "ublk_start_disk", 00:04:55.227 "ublk_destroy_target", 00:04:55.227 "ublk_create_target", 00:04:55.227 "virtio_blk_create_transport", 00:04:55.227 "virtio_blk_get_transports", 00:04:55.227 "vhost_controller_set_coalescing", 00:04:55.227 "vhost_get_controllers", 00:04:55.227 "vhost_delete_controller", 00:04:55.227 "vhost_create_blk_controller", 00:04:55.227 "vhost_scsi_controller_remove_target", 00:04:55.227 "vhost_scsi_controller_add_target", 00:04:55.227 "vhost_start_scsi_controller", 00:04:55.227 "vhost_create_scsi_controller", 00:04:55.227 "thread_set_cpumask", 00:04:55.227 "framework_get_governor", 00:04:55.227 "framework_get_scheduler", 00:04:55.227 "framework_set_scheduler", 00:04:55.227 "framework_get_reactors", 00:04:55.227 "thread_get_io_channels", 00:04:55.227 "thread_get_pollers", 00:04:55.227 "thread_get_stats", 00:04:55.227 "framework_monitor_context_switch", 00:04:55.227 "spdk_kill_instance", 00:04:55.227 "log_enable_timestamps", 00:04:55.227 "log_get_flags", 00:04:55.227 "log_clear_flag", 00:04:55.227 "log_set_flag", 00:04:55.227 "log_get_level", 00:04:55.227 "log_set_level", 00:04:55.227 "log_get_print_level", 00:04:55.227 "log_set_print_level", 00:04:55.227 "framework_enable_cpumask_locks", 00:04:55.227 "framework_disable_cpumask_locks", 00:04:55.227 "framework_wait_init", 00:04:55.227 "framework_start_init", 00:04:55.227 "scsi_get_devices", 00:04:55.227 "bdev_get_histogram", 00:04:55.227 "bdev_enable_histogram", 00:04:55.227 "bdev_set_qos_limit", 00:04:55.227 "bdev_set_qd_sampling_period", 00:04:55.227 "bdev_get_bdevs", 00:04:55.227 "bdev_reset_iostat", 00:04:55.227 "bdev_get_iostat", 00:04:55.227 "bdev_examine", 00:04:55.227 "bdev_wait_for_examine", 00:04:55.227 "bdev_set_options", 00:04:55.227 "notify_get_notifications", 00:04:55.227 "notify_get_types", 00:04:55.227 "accel_get_stats", 00:04:55.227 "accel_set_options", 00:04:55.227 "accel_set_driver", 00:04:55.227 "accel_crypto_key_destroy", 00:04:55.227 "accel_crypto_keys_get", 00:04:55.227 "accel_crypto_key_create", 00:04:55.227 "accel_assign_opc", 00:04:55.227 "accel_get_module_info", 00:04:55.227 "accel_get_opc_assignments", 00:04:55.227 "vmd_rescan", 00:04:55.227 "vmd_remove_device", 00:04:55.227 "vmd_enable", 00:04:55.227 "sock_get_default_impl", 00:04:55.227 "sock_set_default_impl", 00:04:55.227 "sock_impl_set_options", 00:04:55.227 "sock_impl_get_options", 00:04:55.227 "iobuf_get_stats", 00:04:55.227 "iobuf_set_options", 00:04:55.227 "framework_get_pci_devices", 00:04:55.227 
"framework_get_config", 00:04:55.227 "framework_get_subsystems", 00:04:55.227 "trace_get_info", 00:04:55.227 "trace_get_tpoint_group_mask", 00:04:55.227 "trace_disable_tpoint_group", 00:04:55.227 "trace_enable_tpoint_group", 00:04:55.227 "trace_clear_tpoint_mask", 00:04:55.227 "trace_set_tpoint_mask", 00:04:55.227 "keyring_get_keys", 00:04:55.227 "spdk_get_version", 00:04:55.227 "rpc_get_methods" 00:04:55.227 ] 00:04:55.227 16:19:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.227 16:19:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:55.227 16:19:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59708 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59708 ']' 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59708 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59708 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.227 killing process with pid 59708 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59708' 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59708 00:04:55.227 16:19:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59708 00:04:55.486 ************************************ 00:04:55.486 END TEST spdkcli_tcp 00:04:55.486 ************************************ 00:04:55.486 00:04:55.486 real 0m1.775s 00:04:55.486 user 0m3.251s 00:04:55.486 sys 0m0.473s 00:04:55.486 16:19:41 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.486 16:19:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.744 16:19:41 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.745 16:19:41 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.745 16:19:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.745 16:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.745 16:19:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.745 ************************************ 00:04:55.745 START TEST dpdk_mem_utility 00:04:55.745 ************************************ 00:04:55.745 16:19:41 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.745 * Looking for test storage... 
00:04:55.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:55.745 16:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:55.745 16:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59799 00:04:55.745 16:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.745 16:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59799 00:04:55.745 16:19:41 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59799 ']' 00:04:55.745 16:19:41 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.745 16:19:41 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.745 16:19:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.745 16:19:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.745 16:19:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.745 [2024-07-15 16:19:41.221615] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:55.745 [2024-07-15 16:19:41.221735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59799 ] 00:04:56.004 [2024-07-15 16:19:41.363390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.004 [2024-07-15 16:19:41.486202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.004 [2024-07-15 16:19:41.543785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:56.942 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.943 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:56.943 16:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:56.943 16:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:56.943 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.943 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.943 { 00:04:56.943 "filename": "/tmp/spdk_mem_dump.txt" 00:04:56.943 } 00:04:56.943 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.943 16:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.943 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:56.943 1 heaps totaling size 814.000000 MiB 00:04:56.943 size: 814.000000 MiB heap id: 0 00:04:56.943 end heaps---------- 00:04:56.943 8 mempools totaling size 598.116089 MiB 00:04:56.943 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:56.943 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:56.943 size: 84.521057 MiB name: bdev_io_59799 00:04:56.943 size: 51.011292 MiB name: evtpool_59799 00:04:56.943 size: 50.003479 
MiB name: msgpool_59799 00:04:56.943 size: 21.763794 MiB name: PDU_Pool 00:04:56.943 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:56.943 size: 0.026123 MiB name: Session_Pool 00:04:56.943 end mempools------- 00:04:56.943 6 memzones totaling size 4.142822 MiB 00:04:56.943 size: 1.000366 MiB name: RG_ring_0_59799 00:04:56.943 size: 1.000366 MiB name: RG_ring_1_59799 00:04:56.943 size: 1.000366 MiB name: RG_ring_4_59799 00:04:56.943 size: 1.000366 MiB name: RG_ring_5_59799 00:04:56.943 size: 0.125366 MiB name: RG_ring_2_59799 00:04:56.943 size: 0.015991 MiB name: RG_ring_3_59799 00:04:56.943 end memzones------- 00:04:56.943 16:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:56.943 heap id: 0 total size: 814.000000 MiB number of busy elements: 297 number of free elements: 15 00:04:56.943 list of free elements. size: 12.472473 MiB 00:04:56.943 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:56.943 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:56.943 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:56.943 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:56.943 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:56.943 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:56.943 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:56.943 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:56.943 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:56.943 element at address: 0x20001aa00000 with size: 0.569702 MiB 00:04:56.943 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:56.943 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:56.943 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:56.943 element at address: 0x200027e00000 with size: 0.395935 MiB 00:04:56.943 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:56.943 list of standard malloc elements. 
size: 199.264954 MiB 00:04:56.943 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:56.943 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:56.943 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:56.943 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:56.943 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:56.943 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:56.943 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:56.943 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:56.943 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:56.943 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:56.943 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:56.943 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:56.943 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92200 
with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa946c0 with size: 0.000183 MiB 
00:04:56.944 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:56.944 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e65680 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:56.944 element at 
address: 0x200027e6d980 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:56.944 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6fe40 
with size: 0.000183 MiB 00:04:56.945 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:56.945 list of memzone associated elements. size: 602.262573 MiB 00:04:56.945 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:56.945 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:56.945 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:56.945 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:56.945 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:56.945 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59799_0 00:04:56.945 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:56.945 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59799_0 00:04:56.945 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:56.945 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59799_0 00:04:56.945 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:56.945 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:56.945 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:56.945 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:56.945 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:56.945 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59799 00:04:56.945 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:56.945 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59799 00:04:56.945 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:56.945 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59799 00:04:56.945 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:56.945 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:56.945 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:56.945 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:56.945 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:56.945 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:56.945 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:56.945 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:56.945 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:56.945 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59799 00:04:56.945 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:56.945 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59799 00:04:56.945 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:56.945 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59799 00:04:56.945 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:56.945 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59799 00:04:56.945 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:56.945 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59799 00:04:56.945 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:56.945 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:56.945 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:56.945 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:56.945 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:56.945 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:04:56.945 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:56.945 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59799 00:04:56.945 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:56.945 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:56.945 element at address: 0x200027e65740 with size: 0.023743 MiB 00:04:56.945 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:56.945 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:56.945 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59799 00:04:56.945 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:04:56.945 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:56.945 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:56.945 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59799 00:04:56.945 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:56.945 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59799 00:04:56.945 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:04:56.945 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:56.945 16:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:56.945 16:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59799 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59799 ']' 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59799 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59799 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.945 killing process with pid 59799 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59799' 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59799 00:04:56.945 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59799 00:04:57.221 00:04:57.221 real 0m1.662s 00:04:57.221 user 0m1.751s 00:04:57.221 sys 0m0.450s 00:04:57.221 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.221 16:19:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.221 ************************************ 00:04:57.221 END TEST dpdk_mem_utility 00:04:57.221 ************************************ 00:04:57.492 16:19:42 -- common/autotest_common.sh@1142 -- # return 0 00:04:57.492 16:19:42 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.492 16:19:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.492 16:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.492 16:19:42 -- common/autotest_common.sh@10 -- # set +x 00:04:57.492 ************************************ 00:04:57.492 START TEST event 00:04:57.492 ************************************ 00:04:57.492 16:19:42 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 
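For reference, the dpdk_mem_utility pass that just completed reduces to two steps against a running target: ask it to write its DPDK memory dump via the env_dpdk_get_mem_stats RPC (which returned /tmp/spdk_mem_dump.txt above), then summarize that dump with scripts/dpdk_mem_info.py. A minimal replay, assuming the target is still listening on the default /var/tmp/spdk.sock:

  # Request a fresh memory dump, then post-process it.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap/mempool/memzone totals
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # element-level view, as invoked with '-m 0' above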
00:04:57.492 * Looking for test storage... 00:04:57.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.492 16:19:42 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:57.492 16:19:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:57.492 16:19:42 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.492 16:19:42 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:57.492 16:19:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.492 16:19:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.492 ************************************ 00:04:57.492 START TEST event_perf 00:04:57.492 ************************************ 00:04:57.492 16:19:42 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.492 Running I/O for 1 seconds...[2024-07-15 16:19:42.898962] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:04:57.492 [2024-07-15 16:19:42.899071] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59871 ] 00:04:57.492 [2024-07-15 16:19:43.039326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.749 [2024-07-15 16:19:43.148693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.749 Running I/O for 1 seconds...[2024-07-15 16:19:43.148794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.749 [2024-07-15 16:19:43.148872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.749 [2024-07-15 16:19:43.148873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.683 00:04:58.683 lcore 0: 183351 00:04:58.683 lcore 1: 183349 00:04:58.683 lcore 2: 183348 00:04:58.683 lcore 3: 183349 00:04:58.941 done. 00:04:58.941 00:04:58.941 real 0m1.357s 00:04:58.941 user 0m4.146s 00:04:58.941 sys 0m0.064s 00:04:58.941 16:19:44 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.941 16:19:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.941 ************************************ 00:04:58.941 END TEST event_perf 00:04:58.941 ************************************ 00:04:58.941 16:19:44 event -- common/autotest_common.sh@1142 -- # return 0 00:04:58.941 16:19:44 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:58.941 16:19:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:58.941 16:19:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.941 16:19:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.941 ************************************ 00:04:58.941 START TEST event_reactor 00:04:58.941 ************************************ 00:04:58.941 16:19:44 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:58.941 [2024-07-15 16:19:44.299561] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
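Editor's note: the per-lcore counters above come straight from the event_perf app; a minimal sketch of reproducing that run by hand, assuming the same workspace path as this job:

    # 4 cores (-m 0xF) for 1 second (-t 1), as event.sh@45 does above;
    # keep only the per-lcore event counters from the output.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk    # workspace path taken from the trace
    "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1 | grep '^lcore'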
00:04:58.941 [2024-07-15 16:19:44.299640] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59909 ] 00:04:58.941 [2024-07-15 16:19:44.434665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.200 [2024-07-15 16:19:44.552637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.135 test_start 00:05:00.135 oneshot 00:05:00.135 tick 100 00:05:00.135 tick 100 00:05:00.135 tick 250 00:05:00.135 tick 100 00:05:00.135 tick 100 00:05:00.135 tick 100 00:05:00.135 tick 250 00:05:00.135 tick 500 00:05:00.135 tick 100 00:05:00.135 tick 100 00:05:00.135 tick 250 00:05:00.135 tick 100 00:05:00.135 tick 100 00:05:00.135 test_end 00:05:00.135 00:05:00.135 real 0m1.354s 00:05:00.135 user 0m1.195s 00:05:00.135 sys 0m0.052s 00:05:00.135 16:19:45 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.135 ************************************ 00:05:00.135 END TEST event_reactor 00:05:00.135 ************************************ 00:05:00.135 16:19:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:00.135 16:19:45 event -- common/autotest_common.sh@1142 -- # return 0 00:05:00.135 16:19:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.135 16:19:45 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:00.135 16:19:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.135 16:19:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.394 ************************************ 00:05:00.394 START TEST event_reactor_perf 00:05:00.394 ************************************ 00:05:00.394 16:19:45 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.394 [2024-07-15 16:19:45.706222] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
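Editor's note: the single-reactor oneshot/tick trace above is the output of the reactor test app; a minimal sketch of reproducing that one-second run by hand, again assuming this job's workspace layout:

    # One reactor on a single core, driven for one second, matching
    # event.sh@46 above; 'time' approximates the real/user/sys summary.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    time "$SPDK_DIR/test/event/reactor/reactor" -t 1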
00:05:00.394 [2024-07-15 16:19:45.706337] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59939 ] 00:05:00.394 [2024-07-15 16:19:45.845145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.653 [2024-07-15 16:19:45.957045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.588 test_start 00:05:01.588 test_end 00:05:01.588 Performance: 368473 events per second 00:05:01.588 00:05:01.588 real 0m1.367s 00:05:01.588 user 0m1.204s 00:05:01.588 sys 0m0.055s 00:05:01.588 16:19:47 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.588 ************************************ 00:05:01.588 END TEST event_reactor_perf 00:05:01.588 16:19:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.588 ************************************ 00:05:01.588 16:19:47 event -- common/autotest_common.sh@1142 -- # return 0 00:05:01.588 16:19:47 event -- event/event.sh@49 -- # uname -s 00:05:01.588 16:19:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:01.588 16:19:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:01.588 16:19:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.588 16:19:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.588 16:19:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.588 ************************************ 00:05:01.588 START TEST event_scheduler 00:05:01.588 ************************************ 00:05:01.588 16:19:47 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:01.847 * Looking for test storage... 00:05:01.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:01.847 16:19:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:01.847 16:19:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60001 00:05:01.847 16:19:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:01.847 16:19:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.847 16:19:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60001 00:05:01.847 16:19:47 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60001 ']' 00:05:01.847 16:19:47 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.847 16:19:47 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.847 16:19:47 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.847 16:19:47 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.847 16:19:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.847 [2024-07-15 16:19:47.237521] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
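Editor's note: waitforlisten, invoked above for pid 60001, polls the new application's RPC socket until it answers. A simplified stand-in for that loop (the real helper in autotest_common.sh retries more carefully and also checks that the pid is still alive) might look like this, using rpc_get_methods only as a cheap liveness probe:

    # Poll the app's RPC socket until it responds, then continue.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        if "$SPDK_DIR/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done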
00:05:01.847 [2024-07-15 16:19:47.237632] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60001 ] 00:05:01.847 [2024-07-15 16:19:47.375166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.135 [2024-07-15 16:19:47.515695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.135 [2024-07-15 16:19:47.515825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.135 [2024-07-15 16:19:47.515953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.135 [2024-07-15 16:19:47.516227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.733 16:19:48 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.733 16:19:48 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:02.733 16:19:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:02.733 16:19:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.733 16:19:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.733 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.733 POWER: Cannot set governor of lcore 0 to userspace 00:05:02.733 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.733 POWER: Cannot set governor of lcore 0 to performance 00:05:02.733 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.733 POWER: Cannot set governor of lcore 0 to userspace 00:05:02.733 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.733 POWER: Cannot set governor of lcore 0 to userspace 00:05:02.733 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:02.733 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:02.733 POWER: Unable to set Power Management Environment for lcore 0 00:05:02.733 [2024-07-15 16:19:48.205944] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:02.733 [2024-07-15 16:19:48.205968] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:02.733 [2024-07-15 16:19:48.205977] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:02.733 [2024-07-15 16:19:48.205988] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:02.733 [2024-07-15 16:19:48.205996] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:02.733 [2024-07-15 16:19:48.206004] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:02.733 16:19:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.733 16:19:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:02.733 16:19:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.733 16:19:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.733 [2024-07-15 16:19:48.273051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.991 [2024-07-15 16:19:48.309954] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:02.991 16:19:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.991 16:19:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:02.991 16:19:48 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.991 16:19:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.991 16:19:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.991 ************************************ 00:05:02.991 START TEST scheduler_create_thread 00:05:02.991 ************************************ 00:05:02.991 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:02.991 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:02.991 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 2 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 3 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 4 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 5 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 6 00:05:02.992 
16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 7 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 8 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 9 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 10 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.992 16:19:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.992 16:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.367 16:19:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.367 16:19:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:04.367 16:19:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:04.367 16:19:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.367 16:19:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.739 16:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.739 00:05:05.739 real 0m2.612s 00:05:05.739 user 0m0.018s 00:05:05.739 sys 0m0.006s 00:05:05.740 16:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.740 16:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.740 ************************************ 00:05:05.740 END TEST scheduler_create_thread 00:05:05.740 ************************************ 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:05.740 16:19:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:05.740 16:19:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60001 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60001 ']' 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60001 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60001 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:05.740 killing process with pid 60001 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60001' 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60001 00:05:05.740 16:19:50 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60001 00:05:06.002 [2024-07-15 16:19:51.414536] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
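Editor's note: the scheduler_create_thread test above drives the dynamic scheduler entirely through plugin RPCs. A condensed sketch of that sequence, assuming the scheduler test app is still listening on the default /var/tmp/spdk.sock and that rpc.py can import the test's scheduler_plugin module (e.g. via PYTHONPATH):

    # Condensed form of the scheduler.sh@12..@26 RPC sequence traced above.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }

    for mask in 0x1 0x2 0x4 0x8; do
        rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100   # fully busy, pinned
    done
    for mask in 0x1 0x2 0x4 0x8; do
        rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0       # idle, pinned
    done
    rpc scheduler_thread_create -n one_third_active -a 30
    tid=$(rpc scheduler_thread_create -n half_active -a 0)               # the RPC prints the thread id
    rpc scheduler_thread_set_active "$tid" 50
    tid=$(rpc scheduler_thread_create -n deleted -a 100)
    rpc scheduler_thread_delete "$tid"

The active percentage (-a) is what the dynamic scheduler balances against the load/core/busy limits printed when the scheduler was selected above (20/80/95).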
00:05:06.269 00:05:06.269 real 0m4.552s 00:05:06.269 user 0m8.466s 00:05:06.269 sys 0m0.345s 00:05:06.269 16:19:51 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.269 ************************************ 00:05:06.269 END TEST event_scheduler 00:05:06.269 16:19:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.269 ************************************ 00:05:06.269 16:19:51 event -- common/autotest_common.sh@1142 -- # return 0 00:05:06.269 16:19:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:06.269 16:19:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:06.269 16:19:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.269 16:19:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.269 16:19:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.269 ************************************ 00:05:06.269 START TEST app_repeat 00:05:06.269 ************************************ 00:05:06.269 16:19:51 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:06.269 16:19:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60100 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.270 Process app_repeat pid: 60100 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60100' 00:05:06.270 spdk_app_start Round 0 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:06.270 16:19:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60100 /var/tmp/spdk-nbd.sock 00:05:06.270 16:19:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60100 ']' 00:05:06.270 16:19:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.270 16:19:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.270 16:19:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.270 16:19:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.270 16:19:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.270 [2024-07-15 16:19:51.732153] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
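Editor's note: each app_repeat round below starts by creating two malloc bdevs over the app's dedicated RPC socket before exercising them through nbd. A minimal sketch of that setup step, with the socket path and sizes taken from the trace:

    # Two 64 MiB malloc bdevs with a 4096-byte block size, created over
    # app_repeat's private RPC socket; the app reports them as Malloc0 and Malloc1.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    "$SPDK_DIR/scripts/rpc.py" -s "$sock" bdev_malloc_create 64 4096
    "$SPDK_DIR/scripts/rpc.py" -s "$sock" bdev_malloc_create 64 4096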
00:05:06.270 [2024-07-15 16:19:51.732259] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60100 ] 00:05:06.528 [2024-07-15 16:19:51.875029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.528 [2024-07-15 16:19:52.021450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.528 [2024-07-15 16:19:52.021462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.528 [2024-07-15 16:19:52.074494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.460 16:19:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.460 16:19:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:07.461 16:19:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.461 Malloc0 00:05:07.461 16:19:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.717 Malloc1 00:05:07.717 16:19:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.717 16:19:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.975 /dev/nbd0 00:05:07.975 16:19:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.975 16:19:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:07.975 16:19:53 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.975 1+0 records in 00:05:07.975 1+0 records out 00:05:07.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210863 s, 19.4 MB/s 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:07.975 16:19:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:07.975 16:19:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.975 16:19:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.975 16:19:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.232 /dev/nbd1 00:05:08.232 16:19:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.232 16:19:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.232 1+0 records in 00:05:08.232 1+0 records out 00:05:08.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246136 s, 16.6 MB/s 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:08.232 16:19:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:08.232 16:19:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.232 16:19:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.232 16:19:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:08.232 16:19:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.232 16:19:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.489 16:19:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.489 { 00:05:08.489 "nbd_device": "/dev/nbd0", 00:05:08.489 "bdev_name": "Malloc0" 00:05:08.489 }, 00:05:08.489 { 00:05:08.489 "nbd_device": "/dev/nbd1", 00:05:08.489 "bdev_name": "Malloc1" 00:05:08.489 } 00:05:08.489 ]' 00:05:08.489 16:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.489 { 00:05:08.489 "nbd_device": "/dev/nbd0", 00:05:08.489 "bdev_name": "Malloc0" 00:05:08.489 }, 00:05:08.489 { 00:05:08.489 "nbd_device": "/dev/nbd1", 00:05:08.489 "bdev_name": "Malloc1" 00:05:08.489 } 00:05:08.489 ]' 00:05:08.489 16:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.489 /dev/nbd1' 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.489 /dev/nbd1' 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.489 256+0 records in 00:05:08.489 256+0 records out 00:05:08.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00841055 s, 125 MB/s 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.489 16:19:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.747 256+0 records in 00:05:08.747 256+0 records out 00:05:08.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278643 s, 37.6 MB/s 00:05:08.747 16:19:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.747 16:19:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.747 256+0 records in 00:05:08.747 256+0 records out 00:05:08.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219737 s, 47.7 MB/s 00:05:08.747 16:19:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.747 16:19:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.747 16:19:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.747 16:19:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.747 16:19:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.747 16:19:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.748 16:19:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.005 16:19:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.263 16:19:54 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.263 16:19:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.521 16:19:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.521 16:19:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.777 16:19:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.035 [2024-07-15 16:19:55.394272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.035 [2024-07-15 16:19:55.499007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.035 [2024-07-15 16:19:55.499020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.035 [2024-07-15 16:19:55.551224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.035 [2024-07-15 16:19:55.551308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.035 [2024-07-15 16:19:55.551322] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.314 16:19:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.314 spdk_app_start Round 1 00:05:13.314 16:19:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:13.314 16:19:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60100 /var/tmp/spdk-nbd.sock 00:05:13.314 16:19:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60100 ']' 00:05:13.314 16:19:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.314 16:19:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.314 16:19:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
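Editor's note: Round 0 above ran the full nbd write-and-verify cycle from nbd_common.sh, and rounds 1 and 2 repeat it. Stripped of the helper plumbing, the cycle amounts to roughly the following, reusing the temp-file path from the trace:

    # nbd write/verify cycle distilled from nbd_common.sh@94..@105 above.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    tmp=$SPDK_DIR/test/event/nbdrandtest

    "$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$tmp" bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write it out
        cmp -b -n 1M "$tmp" "$nbd"                            # and read-verify
    done
    rm "$tmp"

    "$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_stop_disk /dev/nbd0
    "$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_stop_disk /dev/nbd1

oflag=direct makes dd bypass the page cache on the write path, so the data really traverses the nbd device into the malloc bdev before cmp checks it.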
00:05:13.314 16:19:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.314 16:19:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.314 16:19:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.314 16:19:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:13.314 16:19:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.314 Malloc0 00:05:13.314 16:19:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.572 Malloc1 00:05:13.572 16:19:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.572 16:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.573 16:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.573 16:19:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.831 /dev/nbd0 00:05:13.831 16:19:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.831 16:19:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.831 1+0 records in 00:05:13.831 1+0 records out 
00:05:13.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296945 s, 13.8 MB/s 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.831 16:19:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:13.831 16:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.831 16:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.831 16:19:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.089 /dev/nbd1 00:05:14.089 16:19:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.089 16:19:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.089 1+0 records in 00:05:14.089 1+0 records out 00:05:14.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215403 s, 19.0 MB/s 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.089 16:19:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:14.089 16:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.089 16:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.089 16:19:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.089 16:19:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.089 16:19:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.347 { 00:05:14.347 "nbd_device": "/dev/nbd0", 00:05:14.347 "bdev_name": "Malloc0" 00:05:14.347 }, 00:05:14.347 { 00:05:14.347 "nbd_device": "/dev/nbd1", 00:05:14.347 "bdev_name": "Malloc1" 00:05:14.347 } 
00:05:14.347 ]' 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.347 { 00:05:14.347 "nbd_device": "/dev/nbd0", 00:05:14.347 "bdev_name": "Malloc0" 00:05:14.347 }, 00:05:14.347 { 00:05:14.347 "nbd_device": "/dev/nbd1", 00:05:14.347 "bdev_name": "Malloc1" 00:05:14.347 } 00:05:14.347 ]' 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.347 /dev/nbd1' 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.347 /dev/nbd1' 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.347 16:19:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.605 256+0 records in 00:05:14.605 256+0 records out 00:05:14.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695137 s, 151 MB/s 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.605 256+0 records in 00:05:14.605 256+0 records out 00:05:14.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225073 s, 46.6 MB/s 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.605 256+0 records in 00:05:14.605 256+0 records out 00:05:14.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0335654 s, 31.2 MB/s 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.605 16:19:59 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.605 16:19:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.864 16:20:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.123 16:20:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.381 16:20:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.381 16:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.381 16:20:00 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.381 16:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.381 16:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.381 16:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.381 16:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.381 16:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.382 16:20:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.382 16:20:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.382 16:20:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.382 16:20:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.382 16:20:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.714 16:20:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.972 [2024-07-15 16:20:01.323975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.972 [2024-07-15 16:20:01.437347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.972 [2024-07-15 16:20:01.437361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.972 [2024-07-15 16:20:01.492572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:15.972 [2024-07-15 16:20:01.492656] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.972 [2024-07-15 16:20:01.492670] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.271 16:20:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.271 spdk_app_start Round 2 00:05:19.271 16:20:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:19.271 16:20:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60100 /var/tmp/spdk-nbd.sock 00:05:19.271 16:20:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60100 ']' 00:05:19.271 16:20:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.271 16:20:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.271 16:20:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
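Both app_repeat rounds above list the exported devices with the nbd_get_disks RPC and turn the JSON reply into a device count by piping it through jq and grep -c (count=2 while the disks are mapped, count=0 after teardown). A minimal standalone sketch of that check, using the same rpc.py path and socket as the trace:

# Ask the target which NBD devices are currently exported.
disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)

# Pull out the /dev/nbdX names and count them; on an empty list grep -c
# prints 0 and exits non-zero, hence the "|| true".
names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)

[ "$count" -eq 0 ] && echo "all NBD devices have been stopped"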
00:05:19.271 16:20:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.271 16:20:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.271 16:20:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.271 16:20:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:19.271 16:20:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.271 Malloc0 00:05:19.271 16:20:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.529 Malloc1 00:05:19.529 16:20:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.529 16:20:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.787 /dev/nbd0 00:05:19.787 16:20:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.787 16:20:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.787 1+0 records in 00:05:19.787 1+0 records out 
00:05:19.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330024 s, 12.4 MB/s 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.787 16:20:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.787 16:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.788 16:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.788 16:20:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.045 /dev/nbd1 00:05:20.045 16:20:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.045 16:20:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.045 1+0 records in 00:05:20.045 1+0 records out 00:05:20.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357221 s, 11.5 MB/s 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.045 16:20:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:20.046 16:20:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.046 16:20:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:20.046 16:20:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:20.046 16:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.046 16:20:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.046 16:20:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.046 16:20:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.046 16:20:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.304 { 00:05:20.304 "nbd_device": "/dev/nbd0", 00:05:20.304 "bdev_name": "Malloc0" 00:05:20.304 }, 00:05:20.304 { 00:05:20.304 "nbd_device": "/dev/nbd1", 00:05:20.304 "bdev_name": "Malloc1" 00:05:20.304 } 
00:05:20.304 ]' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.304 { 00:05:20.304 "nbd_device": "/dev/nbd0", 00:05:20.304 "bdev_name": "Malloc0" 00:05:20.304 }, 00:05:20.304 { 00:05:20.304 "nbd_device": "/dev/nbd1", 00:05:20.304 "bdev_name": "Malloc1" 00:05:20.304 } 00:05:20.304 ]' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.304 /dev/nbd1' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.304 /dev/nbd1' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.304 256+0 records in 00:05:20.304 256+0 records out 00:05:20.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103423 s, 101 MB/s 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.304 256+0 records in 00:05:20.304 256+0 records out 00:05:20.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214019 s, 49.0 MB/s 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.304 256+0 records in 00:05:20.304 256+0 records out 00:05:20.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299733 s, 35.0 MB/s 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.304 16:20:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.878 16:20:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.141 16:20:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.141 16:20:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.141 16:20:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:21.399 16:20:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.399 16:20:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.399 16:20:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.399 16:20:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.399 16:20:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.399 16:20:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.399 16:20:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.399 16:20:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.399 16:20:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.399 16:20:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.658 16:20:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.658 [2024-07-15 16:20:07.185030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.916 [2024-07-15 16:20:07.296883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.916 [2024-07-15 16:20:07.296886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.916 [2024-07-15 16:20:07.352627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:21.916 [2024-07-15 16:20:07.352712] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.916 [2024-07-15 16:20:07.352727] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.449 16:20:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60100 /var/tmp/spdk-nbd.sock 00:05:24.449 16:20:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60100 ']' 00:05:24.449 16:20:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.449 16:20:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.449 16:20:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
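Rounds 1 and 2 above each run the same nbd_dd_data_verify cycle: fill a 1 MiB scratch file from /dev/urandom, copy it onto every mapped NBD device with O_DIRECT, then byte-compare the device contents against the file. The cycle reduced to its essentials, with the paths and devices exactly as used in the trace (no retries or cleanup traps):

# Write phase: 256 x 4 KiB of random data, pushed to each device.
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done

# Verify phase: compare the first 1 MiB of each device with the scratch file.
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"
done
rm "$tmp"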
00:05:24.449 16:20:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.449 16:20:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:25.016 16:20:10 event.app_repeat -- event/event.sh@39 -- # killprocess 60100 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60100 ']' 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60100 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60100 00:05:25.016 killing process with pid 60100 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60100' 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60100 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60100 00:05:25.016 spdk_app_start is called in Round 0. 00:05:25.016 Shutdown signal received, stop current app iteration 00:05:25.016 Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 reinitialization... 00:05:25.016 spdk_app_start is called in Round 1. 00:05:25.016 Shutdown signal received, stop current app iteration 00:05:25.016 Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 reinitialization... 00:05:25.016 spdk_app_start is called in Round 2. 00:05:25.016 Shutdown signal received, stop current app iteration 00:05:25.016 Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 reinitialization... 00:05:25.016 spdk_app_start is called in Round 3. 
00:05:25.016 Shutdown signal received, stop current app iteration 00:05:25.016 16:20:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:25.016 16:20:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:25.016 00:05:25.016 real 0m18.813s 00:05:25.016 user 0m41.905s 00:05:25.016 sys 0m2.820s 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.016 ************************************ 00:05:25.016 END TEST app_repeat 00:05:25.016 ************************************ 00:05:25.016 16:20:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.016 16:20:10 event -- common/autotest_common.sh@1142 -- # return 0 00:05:25.016 16:20:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:25.016 16:20:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.016 16:20:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.016 16:20:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.016 16:20:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.017 ************************************ 00:05:25.017 START TEST cpu_locks 00:05:25.017 ************************************ 00:05:25.017 16:20:10 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.276 * Looking for test storage... 00:05:25.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.276 16:20:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:25.276 16:20:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:25.276 16:20:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:25.276 16:20:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:25.276 16:20:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.276 16:20:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.276 16:20:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.276 ************************************ 00:05:25.276 START TEST default_locks 00:05:25.276 ************************************ 00:05:25.276 16:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:25.276 16:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60533 00:05:25.276 16:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60533 00:05:25.276 16:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.276 16:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60533 ']' 00:05:25.276 16:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.276 16:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.276 16:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
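The default_locks test above launches a bare spdk_tgt on core mask 0x1 and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. The loop below is only an illustrative version of such a wait, not the exact waitforlisten() from autotest_common.sh; rpc_get_methods is used here purely as a cheap liveness probe and does not appear in the trace:

#!/usr/bin/env bash
# Poll until the spdk_tgt with the given pid responds on its RPC socket.
pid=$1
rpc_addr=${2:-/var/tmp/spdk.sock}

for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "target died before listening" >&2; exit 1; }
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        exit 0    # socket exists and the RPC server is answering
    fi
    sleep 0.1
done
echo "timed out waiting for $rpc_addr" >&2
exit 1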
00:05:25.276 16:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.276 16:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.276 [2024-07-15 16:20:10.721028] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:25.276 [2024-07-15 16:20:10.721137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60533 ] 00:05:25.535 [2024-07-15 16:20:10.857633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.535 [2024-07-15 16:20:10.970921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.535 [2024-07-15 16:20:11.023756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.471 16:20:11 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.472 16:20:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:26.472 16:20:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60533 00:05:26.472 16:20:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.472 16:20:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60533 00:05:26.731 16:20:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60533 00:05:26.731 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60533 ']' 00:05:26.731 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60533 00:05:26.731 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:26.731 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.731 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60533 00:05:26.991 killing process with pid 60533 00:05:26.991 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.991 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.991 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60533' 00:05:26.991 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60533 00:05:26.991 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60533 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60533 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60533 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.251 16:20:12 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60533 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60533 ']' 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.251 ERROR: process (pid: 60533) is no longer running 00:05:27.251 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60533) - No such process 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.251 00:05:27.251 real 0m2.016s 00:05:27.251 user 0m2.263s 00:05:27.251 sys 0m0.547s 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.251 16:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.251 ************************************ 00:05:27.251 END TEST default_locks 00:05:27.251 ************************************ 00:05:27.251 16:20:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:27.251 16:20:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:27.251 16:20:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.251 16:20:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.251 16:20:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.251 ************************************ 00:05:27.251 START TEST default_locks_via_rpc 00:05:27.251 ************************************ 00:05:27.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
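The killprocess helper traced in the default_locks teardown above boils down to a handful of checks before the actual kill: make sure the pid is still alive, confirm it really is the SPDK reactor we started (its comm shows up as reactor_0) rather than an unrelated process that reused the pid, then terminate and reap it. The same steps as a small sketch:

# Mirrors the killprocess steps visible in the trace.
pid=$1
kill -0 "$pid"                                     # fails if the process is already gone
[ "$(uname)" = Linux ]
process_name=$(ps --no-headers -o comm= "$pid")    # expected: reactor_0
echo "killing process with pid $pid"
kill "$pid"
wait "$pid" 2>/dev/null || true                    # wait only succeeds for our own children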
00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60585 00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60585 00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60585 ']' 00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.251 16:20:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.251 [2024-07-15 16:20:12.796019] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:27.251 [2024-07-15 16:20:12.796121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60585 ] 00:05:27.509 [2024-07-15 16:20:12.939039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.767 [2024-07-15 16:20:13.063036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.767 [2024-07-15 16:20:13.118824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60585 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.332 16:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60585 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60585 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60585 ']' 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60585 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60585 00:05:28.897 killing process with pid 60585 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60585' 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60585 00:05:28.897 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60585 00:05:29.154 00:05:29.154 real 0m1.864s 00:05:29.154 user 0m2.046s 00:05:29.154 sys 0m0.514s 00:05:29.154 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.154 16:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.154 ************************************ 00:05:29.154 END TEST default_locks_via_rpc 00:05:29.154 ************************************ 00:05:29.154 16:20:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:29.154 16:20:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:29.154 16:20:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.154 16:20:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.154 16:20:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.154 ************************************ 00:05:29.154 START TEST non_locking_app_on_locked_coremask 00:05:29.154 ************************************ 00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:29.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
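The default_locks_via_rpc run above toggles the CPU core locks of a live target with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs and verifies the effect with lslocks, grepping for the spdk_cpu_lock file. Driving the same sequence by hand against a target on the default /var/tmp/spdk.sock (pid passed in; the expectations in the comments are what the test asserts):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
pid=$1                                              # pid of a running spdk_tgt

$rpc framework_disable_cpumask_locks                # release the per-core lock file(s)
lslocks -p "$pid" | grep -c spdk_cpu_lock || true   # expect 0 matches here

$rpc framework_enable_cpumask_locks                 # re-acquire them
lslocks -p "$pid" | grep -q spdk_cpu_lock           # expect at least one match again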
00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60636 00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60636 /var/tmp/spdk.sock 00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60636 ']' 00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.154 16:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.154 [2024-07-15 16:20:14.698550] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:29.154 [2024-07-15 16:20:14.698653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60636 ] 00:05:29.411 [2024-07-15 16:20:14.838018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.411 [2024-07-15 16:20:14.950335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.668 [2024-07-15 16:20:15.005036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60652 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60652 /var/tmp/spdk2.sock 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60652 ']' 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.252 16:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.252 [2024-07-15 16:20:15.677127] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:30.252 [2024-07-15 16:20:15.677486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60652 ] 00:05:30.510 [2024-07-15 16:20:15.822093] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.510 [2024-07-15 16:20:15.822156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.510 [2024-07-15 16:20:16.049592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.767 [2024-07-15 16:20:16.156105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.333 16:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.333 16:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:31.333 16:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60636 00:05:31.333 16:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60636 00:05:31.333 16:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60636 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60636 ']' 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60636 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60636 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.899 killing process with pid 60636 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60636' 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60636 00:05:31.899 16:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60636 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60652 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60652 ']' 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@952 -- # kill -0 60652 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60652 00:05:32.866 killing process with pid 60652 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60652' 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60652 00:05:32.866 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60652 00:05:33.124 00:05:33.124 real 0m3.961s 00:05:33.124 user 0m4.324s 00:05:33.124 sys 0m1.070s 00:05:33.124 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.124 ************************************ 00:05:33.124 END TEST non_locking_app_on_locked_coremask 00:05:33.124 ************************************ 00:05:33.124 16:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 16:20:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:33.124 16:20:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:33.124 16:20:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.124 16:20:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.124 16:20:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 ************************************ 00:05:33.124 START TEST locking_app_on_unlocked_coremask 00:05:33.124 ************************************ 00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:33.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
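The non_locking_app_on_locked_coremask run above is, at its core, two targets sharing core mask 0x1: the first claims the core-0 lock file (visible via lslocks as spdk_cpu_lock), while the second is started with --disable-cpumask-locks and a separate RPC socket, so it never attempts the lock and both can run side by side. A condensed sketch of that launch sequence, assuming hugepages and the build from this run are already set up; the real test additionally waits on each RPC socket before proceeding:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$bin -m 0x1 &                                                 # claims the core-0 lock file
pid1=$!

$bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same mask, no lock attempt
pid2=$!

# Only the first instance should show up holding spdk_cpu_lock in lslocks.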
00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60719 00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60719 /var/tmp/spdk.sock 00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60719 ']' 00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.124 16:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.394 [2024-07-15 16:20:18.712913] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:33.394 [2024-07-15 16:20:18.713020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60719 ] 00:05:33.394 [2024-07-15 16:20:18.850441] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:33.394 [2024-07-15 16:20:18.850491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.664 [2024-07-15 16:20:18.956221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.664 [2024-07-15 16:20:19.010007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:34.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.229 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.229 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:34.229 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60731 00:05:34.229 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60731 /var/tmp/spdk2.sock 00:05:34.229 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60731 ']' 00:05:34.230 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.230 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.230 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.230 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:34.230 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.230 16:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.230 [2024-07-15 16:20:19.714736] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:34.230 [2024-07-15 16:20:19.715052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60731 ] 00:05:34.488 [2024-07-15 16:20:19.862694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.745 [2024-07-15 16:20:20.072762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.745 [2024-07-15 16:20:20.188859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:35.313 16:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.313 16:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:35.313 16:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60731 00:05:35.313 16:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60731 00:05:35.313 16:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60719 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60719 ']' 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60719 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60719 00:05:36.247 killing process with pid 60719 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60719' 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60719 00:05:36.247 16:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60719 00:05:36.814 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60731 00:05:36.814 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60731 ']' 00:05:36.814 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60731 00:05:36.814 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 
00:05:36.814 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.814 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60731 00:05:37.073 killing process with pid 60731 00:05:37.073 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.073 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.073 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60731' 00:05:37.073 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60731 00:05:37.073 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60731 00:05:37.333 ************************************ 00:05:37.333 END TEST locking_app_on_unlocked_coremask 00:05:37.333 ************************************ 00:05:37.333 00:05:37.333 real 0m4.114s 00:05:37.333 user 0m4.533s 00:05:37.333 sys 0m1.135s 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.333 16:20:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:37.333 16:20:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:37.333 16:20:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.333 16:20:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.333 16:20:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.333 ************************************ 00:05:37.333 START TEST locking_app_on_locked_coremask 00:05:37.333 ************************************ 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:37.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60801 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60801 /var/tmp/spdk.sock 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60801 ']' 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.333 16:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.333 [2024-07-15 16:20:22.871198] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:37.333 [2024-07-15 16:20:22.871309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60801 ] 00:05:37.590 [2024-07-15 16:20:23.009979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.590 [2024-07-15 16:20:23.123094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.851 [2024-07-15 16:20:23.176973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60813 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60813 /var/tmp/spdk2.sock 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60813 /var/tmp/spdk2.sock 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60813 /var/tmp/spdk2.sock 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60813 ']' 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
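[editor's note] The NOT waitforlisten step being set up above drives the negative case: a second spdk_tgt started with the same single-core mask must fail to claim core 0. A hedged reproduction outside the harness, reusing the binary path and sockets from this log (timings and cleanup omitted):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    sleep 2   # crude stand-in for waitforlisten
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    # expected failure: "Cannot create lock on core 0, probably process <pid> has claimed it."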
00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.417 16:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.417 [2024-07-15 16:20:23.876780] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:38.417 [2024-07-15 16:20:23.877109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60813 ] 00:05:38.675 [2024-07-15 16:20:24.022173] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60801 has claimed it. 00:05:38.675 [2024-07-15 16:20:24.022258] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:39.241 ERROR: process (pid: 60813) is no longer running 00:05:39.241 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60813) - No such process 00:05:39.241 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.241 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:39.241 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:39.241 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.241 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:39.241 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.241 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60801 00:05:39.241 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60801 00:05:39.241 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.498 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60801 00:05:39.498 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60801 ']' 00:05:39.498 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60801 00:05:39.498 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:39.498 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.498 16:20:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60801 00:05:39.498 killing process with pid 60801 00:05:39.499 16:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.499 16:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.499 16:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60801' 00:05:39.499 16:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60801 00:05:39.499 16:20:25 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60801 00:05:40.065 00:05:40.065 real 0m2.583s 00:05:40.065 user 0m2.931s 00:05:40.065 sys 0m0.652s 00:05:40.065 16:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.065 16:20:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.065 ************************************ 00:05:40.065 END TEST locking_app_on_locked_coremask 00:05:40.065 ************************************ 00:05:40.065 16:20:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:40.065 16:20:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:40.065 16:20:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.065 16:20:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.065 16:20:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.065 ************************************ 00:05:40.065 START TEST locking_overlapped_coremask 00:05:40.065 ************************************ 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:40.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60858 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60858 /var/tmp/spdk.sock 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60858 ']' 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.065 16:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.065 [2024-07-15 16:20:25.523535] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:40.065 [2024-07-15 16:20:25.523613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60858 ] 00:05:40.323 [2024-07-15 16:20:25.653471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.323 [2024-07-15 16:20:25.768846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.323 [2024-07-15 16:20:25.769001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.323 [2024-07-15 16:20:25.769006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.323 [2024-07-15 16:20:25.823352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60876 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60876 /var/tmp/spdk2.sock 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60876 /var/tmp/spdk2.sock 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60876 /var/tmp/spdk2.sock 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60876 ']' 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.298 16:20:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.298 [2024-07-15 16:20:26.552284] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:41.298 [2024-07-15 16:20:26.552441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60876 ] 00:05:41.298 [2024-07-15 16:20:26.711221] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60858 has claimed it. 00:05:41.298 [2024-07-15 16:20:26.711304] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:41.864 ERROR: process (pid: 60876) is no longer running 00:05:41.864 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60876) - No such process 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60858 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60858 ']' 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60858 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60858 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60858' 00:05:41.864 killing process with pid 60858 00:05:41.864 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60858 00:05:41.864 16:20:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60858 00:05:42.430 00:05:42.430 real 0m2.252s 00:05:42.430 user 0m6.255s 00:05:42.430 sys 0m0.466s 00:05:42.430 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.430 ************************************ 00:05:42.430 END TEST locking_overlapped_coremask 00:05:42.430 ************************************ 00:05:42.430 16:20:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.430 16:20:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:42.430 16:20:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:42.430 16:20:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.430 16:20:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.430 16:20:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.430 ************************************ 00:05:42.430 START TEST locking_overlapped_coremask_via_rpc 00:05:42.430 ************************************ 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60922 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60922 /var/tmp/spdk.sock 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60922 ']' 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.431 16:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.431 [2024-07-15 16:20:27.797102] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:42.431 [2024-07-15 16:20:27.797191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60922 ] 00:05:42.431 [2024-07-15 16:20:27.932872] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.431 [2024-07-15 16:20:27.932929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.688 [2024-07-15 16:20:28.049117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.688 [2024-07-15 16:20:28.049250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.688 [2024-07-15 16:20:28.049256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.688 [2024-07-15 16:20:28.103648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60940 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60940 /var/tmp/spdk2.sock 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60940 ']' 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.253 16:20:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.253 [2024-07-15 16:20:28.785705] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:43.253 [2024-07-15 16:20:28.786036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60940 ] 00:05:43.510 [2024-07-15 16:20:28.929416] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
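[editor's note] The via_rpc variant differs from the previous test only in that both targets are started with --disable-cpumask-locks, so the overlapping masks (0x7 and 0x1c) no longer collide at startup and locking is re-enabled later over RPC. A sketch of the two launch commands, taken from the invocations above:

    # Both targets start cleanly because core-lock claiming is deferred.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &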
00:05:43.510 [2024-07-15 16:20:28.929515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.768 [2024-07-15 16:20:29.163891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.768 [2024-07-15 16:20:29.166969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:43.768 [2024-07-15 16:20:29.166973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.768 [2024-07-15 16:20:29.275304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.332 [2024-07-15 16:20:29.778985] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60922 has claimed it. 00:05:44.332 request: 00:05:44.332 { 00:05:44.332 "method": "framework_enable_cpumask_locks", 00:05:44.332 "req_id": 1 00:05:44.332 } 00:05:44.332 Got JSON-RPC error response 00:05:44.332 response: 00:05:44.332 { 00:05:44.332 "code": -32603, 00:05:44.332 "message": "Failed to claim CPU core: 2" 00:05:44.332 } 00:05:44.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60922 /var/tmp/spdk.sock 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60922 ']' 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.332 16:20:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.612 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.612 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.612 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60940 /var/tmp/spdk2.sock 00:05:44.612 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60940 ']' 00:05:44.612 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.612 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.613 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
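[editor's note] Both waitforlisten calls above follow the same retry pattern (local max_retries=100, then loop until the RPC socket answers). A simplified poll loop in that spirit - not the real helper, which lives in test/common/autotest_common.sh and also tracks the target pid - might look like:

    wait_for_rpc_sketch() {
        local sock=${1:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            # rpc_get_methods is a cheap call that answers once the app is listening.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }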
00:05:44.613 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.613 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.870 ************************************ 00:05:44.870 END TEST locking_overlapped_coremask_via_rpc 00:05:44.870 ************************************ 00:05:44.870 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.870 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.870 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:44.870 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.870 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.870 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.870 00:05:44.870 real 0m2.543s 00:05:44.870 user 0m1.282s 00:05:44.870 sys 0m0.186s 00:05:44.870 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.870 16:20:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:44.870 16:20:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:44.870 16:20:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60922 ]] 00:05:44.870 16:20:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60922 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60922 ']' 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60922 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60922 00:05:44.870 killing process with pid 60922 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60922' 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60922 00:05:44.870 16:20:30 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60922 00:05:45.434 16:20:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60940 ]] 00:05:45.434 16:20:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60940 00:05:45.434 16:20:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60940 ']' 00:05:45.434 16:20:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60940 00:05:45.434 16:20:30 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:45.434 16:20:30 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.434 16:20:30 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60940 00:05:45.434 killing process with pid 60940 00:05:45.434 16:20:30 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:45.434 16:20:30 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:45.434 16:20:30 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60940' 00:05:45.434 16:20:30 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60940 00:05:45.434 16:20:30 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60940 00:05:45.692 16:20:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:45.692 16:20:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:45.692 16:20:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60922 ]] 00:05:45.692 16:20:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60922 00:05:45.692 16:20:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60922 ']' 00:05:45.692 16:20:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60922 00:05:45.692 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60922) - No such process 00:05:45.692 Process with pid 60922 is not found 00:05:45.692 16:20:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60922 is not found' 00:05:45.692 Process with pid 60940 is not found 00:05:45.692 16:20:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60940 ]] 00:05:45.692 16:20:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60940 00:05:45.692 16:20:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60940 ']' 00:05:45.692 16:20:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60940 00:05:45.692 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60940) - No such process 00:05:45.692 16:20:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60940 is not found' 00:05:45.692 16:20:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:45.692 00:05:45.692 real 0m20.620s 00:05:45.692 user 0m35.647s 00:05:45.692 sys 0m5.361s 00:05:45.692 ************************************ 00:05:45.692 END TEST cpu_locks 00:05:45.692 ************************************ 00:05:45.692 16:20:31 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.692 16:20:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.692 16:20:31 event -- common/autotest_common.sh@1142 -- # return 0 00:05:45.692 ************************************ 00:05:45.692 END TEST event 00:05:45.692 ************************************ 00:05:45.692 00:05:45.692 real 0m48.431s 00:05:45.692 user 1m32.679s 00:05:45.692 sys 0m8.931s 00:05:45.692 16:20:31 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.692 16:20:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.950 16:20:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.950 16:20:31 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:45.950 16:20:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.950 16:20:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.950 16:20:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.950 ************************************ 00:05:45.950 START TEST thread 
00:05:45.950 ************************************ 00:05:45.950 16:20:31 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:45.950 * Looking for test storage... 00:05:45.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:45.950 16:20:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:45.950 16:20:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:45.950 16:20:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.950 16:20:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.950 ************************************ 00:05:45.950 START TEST thread_poller_perf 00:05:45.950 ************************************ 00:05:45.950 16:20:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:45.950 [2024-07-15 16:20:31.367377] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:45.950 [2024-07-15 16:20:31.368074] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61062 ] 00:05:46.209 [2024-07-15 16:20:31.506506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.209 [2024-07-15 16:20:31.618057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.209 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:47.169 ====================================== 00:05:47.169 busy:2212347377 (cyc) 00:05:47.169 total_run_count: 317000 00:05:47.169 tsc_hz: 2200000000 (cyc) 00:05:47.169 ====================================== 00:05:47.169 poller_cost: 6979 (cyc), 3172 (nsec) 00:05:47.169 00:05:47.169 real 0m1.366s 00:05:47.169 user 0m1.203s 00:05:47.169 sys 0m0.055s 00:05:47.169 16:20:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.169 16:20:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.429 ************************************ 00:05:47.429 END TEST thread_poller_perf 00:05:47.429 ************************************ 00:05:47.429 16:20:32 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:47.429 16:20:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.429 16:20:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:47.429 16:20:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.429 16:20:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.429 ************************************ 00:05:47.429 START TEST thread_poller_perf 00:05:47.429 ************************************ 00:05:47.429 16:20:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.429 [2024-07-15 16:20:32.790285] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:05:47.429 [2024-07-15 16:20:32.790667] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61097 ] 00:05:47.429 [2024-07-15 16:20:32.932193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.689 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:47.689 [2024-07-15 16:20:33.057349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.623 ====================================== 00:05:48.623 busy:2202313358 (cyc) 00:05:48.623 total_run_count: 4105000 00:05:48.623 tsc_hz: 2200000000 (cyc) 00:05:48.623 ====================================== 00:05:48.623 poller_cost: 536 (cyc), 243 (nsec) 00:05:48.623 00:05:48.623 real 0m1.371s 00:05:48.623 user 0m1.203s 00:05:48.623 sys 0m0.060s 00:05:48.623 16:20:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.623 ************************************ 00:05:48.623 END TEST thread_poller_perf 00:05:48.623 ************************************ 00:05:48.623 16:20:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.882 16:20:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:48.882 16:20:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:48.882 00:05:48.882 real 0m2.917s 00:05:48.882 user 0m2.454s 00:05:48.882 sys 0m0.237s 00:05:48.882 ************************************ 00:05:48.882 END TEST thread 00:05:48.882 ************************************ 00:05:48.882 16:20:34 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.882 16:20:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.882 16:20:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.882 16:20:34 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:48.882 16:20:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.882 16:20:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.882 16:20:34 -- common/autotest_common.sh@10 -- # set +x 00:05:48.882 ************************************ 00:05:48.882 START TEST accel 00:05:48.882 ************************************ 00:05:48.882 16:20:34 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:48.882 * Looking for test storage... 00:05:48.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:48.882 16:20:34 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:48.882 16:20:34 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:48.882 16:20:34 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.882 16:20:34 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61169 00:05:48.882 16:20:34 accel -- accel/accel.sh@63 -- # waitforlisten 61169 00:05:48.882 16:20:34 accel -- common/autotest_common.sh@829 -- # '[' -z 61169 ']' 00:05:48.882 16:20:34 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
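[editor's note] The poller_cost lines in both poller_perf runs are straight arithmetic over the printed counters: cycles per poll = busy cycles / total_run_count, and nanoseconds = cycles / tsc_hz. Recomputing the second run's numbers (the first run works the same way and gives 6979 cyc / 3172 nsec):

    busy_cyc=2202313358 runs=4105000 tsc_hz=2200000000
    awk -v b="$busy_cyc" -v r="$runs" -v hz="$tsc_hz" \
        'BEGIN { cyc = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / hz * 1e9 }'
    # -> poller_cost: 536 (cyc), 243 (nsec), matching the summary above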
00:05:48.882 16:20:34 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:48.882 16:20:34 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.882 16:20:34 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.882 16:20:34 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:48.882 16:20:34 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.882 16:20:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.882 16:20:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.882 16:20:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.882 16:20:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.882 16:20:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.882 16:20:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.882 16:20:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:48.882 16:20:34 accel -- accel/accel.sh@41 -- # jq -r . 00:05:48.882 [2024-07-15 16:20:34.365948] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:48.883 [2024-07-15 16:20:34.366037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61169 ] 00:05:49.141 [2024-07-15 16:20:34.499293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.141 [2024-07-15 16:20:34.611959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.141 [2024-07-15 16:20:34.666980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@862 -- # return 0 00:05:50.076 16:20:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:50.076 16:20:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:50.076 16:20:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:50.076 16:20:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:50.076 16:20:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:50.076 16:20:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.076 16:20:35 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 
16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:50.076 16:20:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:50.076 16:20:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:50.076 16:20:35 accel -- accel/accel.sh@75 -- # killprocess 61169 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@948 -- # '[' -z 61169 ']' 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@952 -- # kill -0 61169 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@953 -- # uname 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61169 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.076 16:20:35 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.077 16:20:35 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61169' 00:05:50.077 killing process with pid 61169 00:05:50.077 16:20:35 accel -- common/autotest_common.sh@967 -- # kill 61169 00:05:50.077 16:20:35 accel -- common/autotest_common.sh@972 -- # wait 61169 00:05:50.335 16:20:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:50.335 16:20:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:50.335 16:20:35 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:50.335 16:20:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.335 16:20:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.335 16:20:35 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:50.335 16:20:35 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:50.335 16:20:35 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:50.335 16:20:35 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.335 16:20:35 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.335 16:20:35 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.335 16:20:35 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.335 16:20:35 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.335 16:20:35 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:50.335 16:20:35 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
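[editor's note] The long for/read loop above is just building a map of opcode -> module from the accel_get_opc_assignments RPC; with no explicit assignments every opcode resolves to the software module. A condensed equivalent, assuming a running target and the repo's scripts/rpc.py (the 'copy' key is only an illustration):

    declare -A expected_opcs
    while IFS== read -r opc module; do
        expected_opcs["$opc"]=$module
    done < <(/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
             | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')
    echo "copy is handled by: ${expected_opcs[copy]:-unset}"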
00:05:50.335 16:20:35 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.335 16:20:35 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:50.593 16:20:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.593 16:20:35 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:50.593 16:20:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:50.593 16:20:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.593 16:20:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.593 ************************************ 00:05:50.593 START TEST accel_missing_filename 00:05:50.593 ************************************ 00:05:50.593 16:20:35 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:50.593 16:20:35 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:50.593 16:20:35 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:50.593 16:20:35 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:50.593 16:20:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.593 16:20:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:50.593 16:20:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.593 16:20:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:50.594 16:20:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:50.594 16:20:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:50.594 16:20:35 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.594 16:20:35 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.594 16:20:35 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.594 16:20:35 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.594 16:20:35 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.594 16:20:35 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:50.594 16:20:35 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:50.594 [2024-07-15 16:20:35.945550] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:50.594 [2024-07-15 16:20:35.945675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61225 ] 00:05:50.594 [2024-07-15 16:20:36.083225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.852 [2024-07-15 16:20:36.223899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.852 [2024-07-15 16:20:36.278444] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.852 [2024-07-15 16:20:36.352182] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:51.111 A filename is required. 
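[editor's note] "A filename is required." is the expected outcome here: the compress workload reads its input from the file given with -l, which this negative test deliberately omits. The positive form, using the same binary and the bib test file referenced by the next test, would be roughly:

    # Sketch: same run with the required input file supplied.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib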
00:05:51.111 16:20:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:51.111 ************************************ 00:05:51.111 END TEST accel_missing_filename 00:05:51.111 ************************************ 00:05:51.111 16:20:36 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.111 16:20:36 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:51.111 16:20:36 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:51.112 16:20:36 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:51.112 16:20:36 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.112 00:05:51.112 real 0m0.523s 00:05:51.112 user 0m0.353s 00:05:51.112 sys 0m0.121s 00:05:51.112 16:20:36 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.112 16:20:36 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:51.112 16:20:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.112 16:20:36 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:51.112 16:20:36 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:51.112 16:20:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.112 16:20:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.112 ************************************ 00:05:51.112 START TEST accel_compress_verify 00:05:51.112 ************************************ 00:05:51.112 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:51.112 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:51.112 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:51.112 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:51.112 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.112 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:51.112 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.112 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:51.112 16:20:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:51.112 16:20:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:51.112 16:20:36 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.112 16:20:36 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.112 16:20:36 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.112 16:20:36 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.112 16:20:36 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.112 16:20:36 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:51.112 16:20:36 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:51.112 [2024-07-15 16:20:36.516241] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:51.112 [2024-07-15 16:20:36.516358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61249 ] 00:05:51.112 [2024-07-15 16:20:36.654794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.370 [2024-07-15 16:20:36.769271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.370 [2024-07-15 16:20:36.826076] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.370 [2024-07-15 16:20:36.906306] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:51.629 00:05:51.629 Compression does not support the verify option, aborting. 00:05:51.629 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:51.629 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.629 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:51.629 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:51.629 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:51.629 ************************************ 00:05:51.629 END TEST accel_compress_verify 00:05:51.629 ************************************ 00:05:51.629 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.629 00:05:51.629 real 0m0.498s 00:05:51.629 user 0m0.327s 00:05:51.629 sys 0m0.114s 00:05:51.629 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.629 16:20:36 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:51.629 16:20:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.629 16:20:37 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:51.629 16:20:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:51.629 16:20:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.629 16:20:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.629 ************************************ 00:05:51.629 START TEST accel_wrong_workload 00:05:51.629 ************************************ 00:05:51.629 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:51.629 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:51.629 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:51.629 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:51.629 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.629 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:51.629 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.629 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:51.629 16:20:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:51.629 16:20:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:51.629 16:20:37 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.629 16:20:37 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.629 16:20:37 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.629 16:20:37 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.629 16:20:37 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.629 16:20:37 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:51.629 16:20:37 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:51.629 Unsupported workload type: foobar 00:05:51.629 [2024-07-15 16:20:37.062518] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:51.629 accel_perf options: 00:05:51.629 [-h help message] 00:05:51.629 [-q queue depth per core] 00:05:51.629 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:51.629 [-T number of threads per core 00:05:51.629 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:51.629 [-t time in seconds] 00:05:51.629 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:51.629 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:51.629 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:51.629 [-l for compress/decompress workloads, name of uncompressed input file 00:05:51.630 [-S for crc32c workload, use this seed value (default 0) 00:05:51.630 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:51.630 [-f for fill workload, use this BYTE value (default 255) 00:05:51.630 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:51.630 [-y verify result if this switch is on] 00:05:51.630 [-a tasks to allocate per core (default: same value as -q)] 00:05:51.630 Can be used to spread operations across a wider range of memory. 
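The usage text above is printed because "foobar" is not one of the supported workload types; the listed options are the same ones that drive the valid runs later in this log. A minimal sketch of a valid invocation under those options (queue depth and transfer size are illustrative values, not taken from this log; the -w crc32c -S 32 -y combination mirrors the accel_crc32c test that follows):

    # hypothetical standalone run: crc32c workload for 1 second, queue depth 64,
    # 4 KiB transfers, seed value 32, with result verification (-y) enabled
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -q 64 -o 4096 -w crc32c -S 32 -y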
00:05:51.630 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:51.630 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.630 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.630 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.630 00:05:51.630 real 0m0.029s 00:05:51.630 user 0m0.017s 00:05:51.630 sys 0m0.012s 00:05:51.630 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.630 ************************************ 00:05:51.630 END TEST accel_wrong_workload 00:05:51.630 ************************************ 00:05:51.630 16:20:37 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:51.630 16:20:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.630 16:20:37 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:51.630 16:20:37 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:51.630 16:20:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.630 16:20:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.630 ************************************ 00:05:51.630 START TEST accel_negative_buffers 00:05:51.630 ************************************ 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:51.630 16:20:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:51.630 16:20:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:51.630 16:20:37 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.630 16:20:37 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.630 16:20:37 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.630 16:20:37 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.630 16:20:37 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.630 16:20:37 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:51.630 16:20:37 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:51.630 -x option must be non-negative. 
00:05:51.630 [2024-07-15 16:20:37.139548] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:51.630 accel_perf options: 00:05:51.630 [-h help message] 00:05:51.630 [-q queue depth per core] 00:05:51.630 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:51.630 [-T number of threads per core 00:05:51.630 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:51.630 [-t time in seconds] 00:05:51.630 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:51.630 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:51.630 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:51.630 [-l for compress/decompress workloads, name of uncompressed input file 00:05:51.630 [-S for crc32c workload, use this seed value (default 0) 00:05:51.630 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:51.630 [-f for fill workload, use this BYTE value (default 255) 00:05:51.630 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:51.630 [-y verify result if this switch is on] 00:05:51.630 [-a tasks to allocate per core (default: same value as -q)] 00:05:51.630 Can be used to spread operations across a wider range of memory. 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.630 ************************************ 00:05:51.630 END TEST accel_negative_buffers 00:05:51.630 ************************************ 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.630 00:05:51.630 real 0m0.032s 00:05:51.630 user 0m0.021s 00:05:51.630 sys 0m0.010s 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.630 16:20:37 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:51.946 16:20:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.947 16:20:37 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:51.947 16:20:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:51.947 16:20:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.947 16:20:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.947 ************************************ 00:05:51.947 START TEST accel_crc32c 00:05:51.947 ************************************ 00:05:51.947 16:20:37 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:51.947 16:20:37 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:51.947 [2024-07-15 16:20:37.214756] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:51.947 [2024-07-15 16:20:37.214845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61308 ] 00:05:51.947 [2024-07-15 16:20:37.354013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.947 [2024-07-15 16:20:37.470595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.206 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.207 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.207 16:20:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.207 16:20:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.207 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.207 16:20:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.142 16:20:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.142 16:20:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.142 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.142 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:53.142 16:20:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.142 16:20:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.142 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.142 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:53.400 16:20:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.400 00:05:53.400 real 0m1.506s 00:05:53.400 user 0m1.298s 00:05:53.400 sys 0m0.113s 00:05:53.400 16:20:38 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.400 16:20:38 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:53.400 ************************************ 00:05:53.400 END TEST accel_crc32c 00:05:53.400 ************************************ 00:05:53.400 16:20:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.400 16:20:38 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:53.400 16:20:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:53.400 16:20:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.400 16:20:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.400 ************************************ 00:05:53.401 START TEST accel_crc32c_C2 00:05:53.401 ************************************ 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:53.401 16:20:38 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:53.401 16:20:38 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:53.401 [2024-07-15 16:20:38.770364] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:53.401 [2024-07-15 16:20:38.770466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61342 ] 00:05:53.401 [2024-07-15 16:20:38.906829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.660 [2024-07-15 16:20:39.022663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.660 16:20:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.036 16:20:40 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.036 00:05:55.036 real 0m1.506s 00:05:55.036 user 0m1.298s 00:05:55.036 sys 0m0.110s 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.036 16:20:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:55.036 ************************************ 00:05:55.036 END TEST accel_crc32c_C2 00:05:55.036 ************************************ 00:05:55.036 16:20:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.036 16:20:40 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:55.036 16:20:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:55.036 16:20:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.036 16:20:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.036 ************************************ 00:05:55.036 START TEST accel_copy 00:05:55.036 ************************************ 00:05:55.036 16:20:40 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.036 16:20:40 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:55.036 16:20:40 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:55.036 [2024-07-15 16:20:40.324310] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:55.036 [2024-07-15 16:20:40.324397] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61377 ] 00:05:55.036 [2024-07-15 16:20:40.462268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.036 [2024-07-15 16:20:40.568168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 
16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.295 16:20:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:56.671 16:20:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.671 00:05:56.671 real 0m1.505s 00:05:56.671 user 0m1.283s 00:05:56.671 sys 0m0.125s 00:05:56.671 16:20:41 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.671 16:20:41 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:56.671 ************************************ 00:05:56.671 END TEST accel_copy 00:05:56.671 ************************************ 00:05:56.671 16:20:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.671 16:20:41 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.671 16:20:41 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:56.671 16:20:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.671 16:20:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.671 ************************************ 00:05:56.671 START TEST accel_fill 00:05:56.671 ************************************ 00:05:56.671 16:20:41 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.671 16:20:41 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.671 16:20:41 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:56.672 16:20:41 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:56.672 [2024-07-15 16:20:41.880034] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:56.672 [2024-07-15 16:20:41.880142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61411 ] 00:05:56.672 [2024-07-15 16:20:42.023154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.672 [2024-07-15 16:20:42.147036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.672 16:20:42 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.672 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.931 16:20:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.866 ************************************ 00:05:57.866 END TEST accel_fill 00:05:57.866 ************************************ 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:57.866 16:20:43 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.866 00:05:57.866 real 0m1.527s 00:05:57.866 user 0m0.014s 00:05:57.866 sys 0m0.004s 00:05:57.866 16:20:43 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.866 16:20:43 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:58.125 16:20:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.125 16:20:43 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:58.125 16:20:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:58.125 16:20:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.125 16:20:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.125 ************************************ 00:05:58.125 START TEST accel_copy_crc32c 00:05:58.125 ************************************ 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:58.125 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:58.125 [2024-07-15 16:20:43.452442] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:58.125 [2024-07-15 16:20:43.452559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61446 ] 00:05:58.125 [2024-07-15 16:20:43.591043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.384 [2024-07-15 16:20:43.705594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:58.384 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.385 16:20:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.781 ************************************ 00:05:59.781 END TEST accel_copy_crc32c 00:05:59.781 00:05:59.781 real 0m1.505s 00:05:59.781 user 0m0.014s 00:05:59.781 sys 0m0.002s 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.781 16:20:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:59.781 ************************************ 00:05:59.781 16:20:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.781 16:20:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:59.781 16:20:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:59.781 16:20:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.781 16:20:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.781 ************************************ 00:05:59.781 START TEST accel_copy_crc32c_C2 00:05:59.781 ************************************ 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.781 16:20:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:59.781 [2024-07-15 16:20:45.003327] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:05:59.781 [2024-07-15 16:20:45.003429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61480 ] 00:05:59.781 [2024-07-15 16:20:45.142547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.781 [2024-07-15 16:20:45.255292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:59.781 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.782 16:20:45 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.782 16:20:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:01.159 ************************************ 00:06:01.159 END TEST accel_copy_crc32c_C2 00:06:01.159 ************************************ 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.159 00:06:01.159 real 0m1.510s 00:06:01.159 
user 0m1.299s 00:06:01.159 sys 0m0.112s 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.159 16:20:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:01.159 16:20:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.159 16:20:46 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:01.159 16:20:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.159 16:20:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.159 16:20:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.159 ************************************ 00:06:01.159 START TEST accel_dualcast 00:06:01.159 ************************************ 00:06:01.159 16:20:46 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:01.159 16:20:46 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:01.159 [2024-07-15 16:20:46.559820] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:01.159 [2024-07-15 16:20:46.559928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61515 ] 00:06:01.159 [2024-07-15 16:20:46.695073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.418 [2024-07-15 16:20:46.807085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.418 16:20:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:02.793 16:20:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.793 00:06:02.793 real 0m1.504s 00:06:02.793 user 0m1.291s 00:06:02.793 sys 0m0.113s 00:06:02.793 16:20:48 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.793 16:20:48 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:02.793 ************************************ 00:06:02.793 END TEST accel_dualcast 00:06:02.793 ************************************ 00:06:02.793 16:20:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.793 16:20:48 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:02.793 16:20:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:02.793 16:20:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.793 16:20:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.793 ************************************ 00:06:02.793 START TEST accel_compare 00:06:02.793 ************************************ 00:06:02.794 16:20:48 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:02.794 16:20:48 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:02.794 [2024-07-15 16:20:48.107702] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:02.794 [2024-07-15 16:20:48.107836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61555 ] 00:06:02.794 [2024-07-15 16:20:48.253149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.052 [2024-07-15 16:20:48.368578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.052 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.053 16:20:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.075 16:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.075 16:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.075 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.075 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.075 16:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.075 16:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.075 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.075 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:04.076 16:20:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.076 00:06:04.076 real 0m1.512s 00:06:04.076 user 0m1.302s 00:06:04.076 sys 0m0.115s 00:06:04.076 16:20:49 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.076 16:20:49 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:04.076 ************************************ 00:06:04.076 END TEST accel_compare 00:06:04.076 ************************************ 00:06:04.332 16:20:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.332 16:20:49 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:04.332 16:20:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:04.332 16:20:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.332 16:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.332 ************************************ 00:06:04.332 START TEST accel_xor 00:06:04.332 ************************************ 00:06:04.332 16:20:49 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.332 16:20:49 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.333 16:20:49 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.333 16:20:49 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:04.333 16:20:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:04.333 [2024-07-15 16:20:49.662937] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:04.333 [2024-07-15 16:20:49.663042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61584 ] 00:06:04.333 [2024-07-15 16:20:49.797761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.591 [2024-07-15 16:20:49.917885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.591 16:20:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.965 16:20:51 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.965 00:06:05.965 real 0m1.510s 00:06:05.965 user 0m1.302s 00:06:05.965 sys 0m0.118s 00:06:05.965 16:20:51 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.965 16:20:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:05.965 ************************************ 00:06:05.965 END TEST accel_xor 00:06:05.965 ************************************ 00:06:05.965 16:20:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.965 16:20:51 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:05.965 16:20:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:05.965 16:20:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.965 16:20:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.965 ************************************ 00:06:05.965 START TEST accel_xor 00:06:05.965 ************************************ 00:06:05.965 16:20:51 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:05.965 16:20:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:05.965 [2024-07-15 16:20:51.224214] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:05.965 [2024-07-15 16:20:51.224323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61624 ] 00:06:05.965 [2024-07-15 16:20:51.362742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.965 [2024-07-15 16:20:51.489248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.224 16:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:52 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:07.599 16:20:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.599 00:06:07.599 real 0m1.530s 00:06:07.599 user 0m1.312s 00:06:07.599 sys 0m0.125s 00:06:07.599 16:20:52 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.599 ************************************ 00:06:07.599 END TEST accel_xor 00:06:07.599 ************************************ 00:06:07.599 16:20:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:07.599 16:20:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.599 16:20:52 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:07.599 16:20:52 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:07.599 16:20:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.599 16:20:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.599 ************************************ 00:06:07.599 START TEST accel_dif_verify 00:06:07.599 ************************************ 00:06:07.599 16:20:52 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:07.599 16:20:52 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:07.599 [2024-07-15 16:20:52.799078] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:07.599 [2024-07-15 16:20:52.799168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61653 ] 00:06:07.599 [2024-07-15 16:20:52.929688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.599 [2024-07-15 16:20:53.044058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.599 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.600 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.600 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.600 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.600 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.600 16:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.600 16:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.600 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.600 16:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.994 16:20:54 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:08.994 16:20:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.994 00:06:08.994 real 0m1.499s 00:06:08.994 user 0m1.303s 00:06:08.994 sys 0m0.101s 00:06:08.994 16:20:54 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.994 16:20:54 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:08.994 ************************************ 00:06:08.994 END TEST accel_dif_verify 00:06:08.994 ************************************ 00:06:08.994 16:20:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.994 16:20:54 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:08.994 16:20:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:08.994 16:20:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.994 16:20:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.994 ************************************ 00:06:08.994 START TEST accel_dif_generate 00:06:08.994 ************************************ 00:06:08.994 16:20:54 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:08.994 16:20:54 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:08.994 16:20:54 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:08.994 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.994 16:20:54 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.994 16:20:54 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:08.994 16:20:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:08.994 16:20:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:08.994 16:20:54 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.995 16:20:54 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.995 16:20:54 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.995 16:20:54 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.995 16:20:54 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.995 16:20:54 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:08.995 16:20:54 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:08.995 [2024-07-15 16:20:54.352730] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:08.995 [2024-07-15 16:20:54.352826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61693 ] 00:06:08.995 [2024-07-15 16:20:54.483162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.253 [2024-07-15 16:20:54.622531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.253 16:20:54 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.253 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.254 16:20:54 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.254 16:20:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:10.630 16:20:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.630 00:06:10.630 real 0m1.533s 
00:06:10.630 user 0m1.314s 00:06:10.630 sys 0m0.123s 00:06:10.630 16:20:55 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.630 16:20:55 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:10.630 ************************************ 00:06:10.630 END TEST accel_dif_generate 00:06:10.630 ************************************ 00:06:10.630 16:20:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.630 16:20:55 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:10.630 16:20:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:10.630 16:20:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.630 16:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.630 ************************************ 00:06:10.630 START TEST accel_dif_generate_copy 00:06:10.630 ************************************ 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:10.630 16:20:55 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:10.630 [2024-07-15 16:20:55.936253] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:10.630 [2024-07-15 16:20:55.936361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61722 ] 00:06:10.630 [2024-07-15 16:20:56.078952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.889 [2024-07-15 16:20:56.205187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.889 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.889 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.889 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.889 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.889 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.889 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.889 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.889 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.890 16:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:12.293 ************************************ 00:06:12.293 END TEST accel_dif_generate_copy 00:06:12.293 ************************************ 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.293 00:06:12.293 real 0m1.518s 00:06:12.293 user 0m1.297s 00:06:12.293 sys 0m0.125s 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.293 16:20:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:12.293 16:20:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.293 16:20:57 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:12.293 16:20:57 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.293 16:20:57 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:12.293 16:20:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.293 16:20:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.293 ************************************ 00:06:12.293 START TEST accel_comp 00:06:12.293 ************************************ 00:06:12.293 16:20:57 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:12.293 16:20:57 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:12.293 16:20:57 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:12.293 [2024-07-15 16:20:57.506881] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:12.293 [2024-07-15 16:20:57.508042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61762 ] 00:06:12.293 [2024-07-15 16:20:57.651853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.293 [2024-07-15 16:20:57.785184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.552 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 16:20:57 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.553 16:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:13.490 ************************************ 00:06:13.490 END TEST accel_comp 00:06:13.490 ************************************ 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:13.490 16:20:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.490 00:06:13.490 real 0m1.542s 00:06:13.490 user 0m1.313s 00:06:13.490 sys 0m0.131s 00:06:13.490 16:20:59 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.490 16:20:59 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:13.749 16:20:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.749 16:20:59 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.749 16:20:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:13.749 16:20:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.749 16:20:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.749 ************************************ 00:06:13.749 START TEST accel_decomp 00:06:13.749 ************************************ 00:06:13.749 16:20:59 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:13.749 16:20:59 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:13.749 [2024-07-15 16:20:59.097783] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:13.749 [2024-07-15 16:20:59.097909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61791 ] 00:06:13.749 [2024-07-15 16:20:59.237985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.008 [2024-07-15 16:20:59.357403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.008 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.009 16:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:15.386 ************************************ 00:06:15.386 END TEST accel_decomp 00:06:15.386 ************************************ 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:15.386 16:21:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.386 00:06:15.386 real 0m1.515s 00:06:15.386 user 0m1.302s 00:06:15.386 sys 0m0.120s 00:06:15.386 16:21:00 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.386 16:21:00 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:15.386 16:21:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.386 16:21:00 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:15.386 16:21:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:15.386 16:21:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.386 16:21:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.386 ************************************ 00:06:15.386 START TEST accel_decomp_full 00:06:15.386 ************************************ 00:06:15.386 16:21:00 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:15.386 16:21:00 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:15.386 [2024-07-15 16:21:00.657259] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
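The trace above shows both the wrapper call and the underlying binary for accel_decomp_full: /home/vagrant/spdk_repo/spdk/build/examples/accel_perf driven with a decompress workload against the pre-built test input test/accel/bib. A minimal hand-run reproduction, assuming the same checkout path as this job, is sketched below; the JSON config the harness pipes in over /dev/fd/62 is empty in this run (accel_json_cfg=() in the trace), so omitting -c should leave accel_perf on the same software accel module the test ended up using (accel_module=software above).

    # Sketch only, paths taken from this job's trace -- not part of accel.sh itself.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0
    # -t 1 / -w decompress : one-second decompress workload, as recorded above
    # -l .../test/accel/bib: compressed input used by all of the accel decompress tests
    # -y                   : verify the output
    # -o 0                 : full-size transfers; with it the trace records '111250 bytes'
    #                        per operation instead of the '4096 bytes' of plain accel_decomp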
00:06:15.386 [2024-07-15 16:21:00.657344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61831 ] 00:06:15.386 [2024-07-15 16:21:00.789345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.386 [2024-07-15 16:21:00.902816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.645 16:21:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.020 16:21:02 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:17.020 16:21:02 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.020 00:06:17.020 real 0m1.511s 00:06:17.020 user 0m1.299s 00:06:17.020 sys 0m0.118s 00:06:17.020 ************************************ 00:06:17.020 END TEST accel_decomp_full 00:06:17.020 ************************************ 00:06:17.020 16:21:02 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.020 16:21:02 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:17.020 16:21:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.020 16:21:02 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.020 16:21:02 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:17.020 16:21:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.020 16:21:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.020 ************************************ 00:06:17.020 START TEST accel_decomp_mcore 00:06:17.020 ************************************ 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:17.020 [2024-07-15 16:21:02.216583] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:17.020 [2024-07-15 16:21:02.216683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61862 ] 00:06:17.020 [2024-07-15 16:21:02.350741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.020 [2024-07-15 16:21:02.467119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.020 [2024-07-15 16:21:02.467171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.020 [2024-07-15 16:21:02.467225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.020 [2024-07-15 16:21:02.467230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.020 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.021 16:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.409 00:06:18.409 real 0m1.543s 00:06:18.409 user 0m4.780s 00:06:18.409 sys 0m0.139s 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.409 ************************************ 00:06:18.409 END TEST accel_decomp_mcore 00:06:18.409 ************************************ 00:06:18.409 16:21:03 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:18.409 16:21:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.409 16:21:03 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:18.409 16:21:03 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:18.409 16:21:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.409 16:21:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.409 ************************************ 00:06:18.409 START TEST accel_decomp_full_mcore 00:06:18.409 ************************************ 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.409 16:21:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:18.409 16:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:18.409 [2024-07-15 16:21:03.809936] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:18.409 [2024-07-15 16:21:03.810036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61905 ] 00:06:18.409 [2024-07-15 16:21:03.946161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.668 [2024-07-15 16:21:04.066356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.668 [2024-07-15 16:21:04.066499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.668 [2024-07-15 16:21:04.066555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.668 [2024-07-15 16:21:04.066983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:18.668 16:21:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.668 16:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.040 ************************************ 00:06:20.040 END TEST accel_decomp_full_mcore 00:06:20.040 ************************************ 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.040 00:06:20.040 real 0m1.560s 00:06:20.040 user 0m4.828s 00:06:20.040 sys 0m0.139s 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.040 16:21:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:20.040 16:21:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.040 16:21:05 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:20.040 16:21:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:20.040 16:21:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.040 16:21:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.040 ************************************ 00:06:20.040 START TEST accel_decomp_mthread 00:06:20.040 ************************************ 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:20.040 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:20.040 [2024-07-15 16:21:05.416355] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
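The two _mcore cases that finish just above differ from the single-core runs only by the -m 0xf core mask, which is why their EAL parameter line shows -c 0xf and four reactors come up (cores 0-3). accel_decomp_mthread, whose start is traced above, stays on one core (-c 0x1) and instead adds -T 2, which the script records as val=2. A hand-run sketch of that variant, under the same path assumption as before:

    # Sketch only: the _mthread variant as driven in this run (single core, -T 2).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2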
00:06:20.040 [2024-07-15 16:21:05.416447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:06:20.040 [2024-07-15 16:21:05.551230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.297 [2024-07-15 16:21:05.666498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.297 16:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.670 00:06:21.670 real 0m1.508s 00:06:21.670 user 0m1.293s 00:06:21.670 sys 0m0.122s 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.670 ************************************ 00:06:21.670 END TEST accel_decomp_mthread 00:06:21.670 ************************************ 00:06:21.670 16:21:06 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:21.670 16:21:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.670 16:21:06 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:21.670 16:21:06 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:21.670 16:21:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.670 16:21:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.670 ************************************ 00:06:21.670 START 
TEST accel_decomp_full_mthread 00:06:21.670 ************************************ 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:21.670 16:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:21.670 [2024-07-15 16:21:06.975509] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
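accel_decomp_full_mthread, started above, simply combines the two knobs exercised separately so far: -o 0 for full-size (111250-byte) transfers and -T 2. If one wanted to sweep those combinations by hand rather than through run_test, a small loop over the same flags would do; this is only an illustrative sketch, not something accel.sh itself runs:

    # Sketch only: sweep the -o 0 / -T 2 combinations covered by the four single-core tests.
    SPDK=/home/vagrant/spdk_repo/spdk
    for extra in "" "-o 0" "-T 2" "-o 0 -T 2"; do
        # $extra is left unquoted on purpose so the flags split into separate arguments.
        "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y $extra
    done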
00:06:21.670 [2024-07-15 16:21:06.975635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61977 ] 00:06:21.670 [2024-07-15 16:21:07.118545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.929 [2024-07-15 16:21:07.235045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.929 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.930 16:21:07 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.930 16:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.301 00:06:23.301 real 0m1.537s 00:06:23.301 user 0m1.317s 00:06:23.301 sys 0m0.125s 00:06:23.301 ************************************ 00:06:23.301 END TEST accel_decomp_full_mthread 00:06:23.301 ************************************ 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.301 16:21:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
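For reference, the multi-threaded decompress pass that just finished is driven entirely by the accel_perf example binary; the sketch below re-runs it by hand with the exact arguments seen in the trace. Assumptions: the same repo layout as this job, and that the empty accel JSON config built by build_accel_config (the trace shows accel_json_cfg=() stayed empty) can simply be dropped instead of being piped in via -c /dev/fd/62.

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0 -T 2
    # -w selects the workload, -t the run time in seconds, -T the thread count
    # (the "mthread" part of the test name), -y verifies the output, and
    # -l/-o 0 feed the full bib test file; these flag readings are inferred
    # from the test name and arguments, not stated in the log itself.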
00:06:23.301 16:21:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.301 16:21:08 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:23.301 16:21:08 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:23.301 16:21:08 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:23.301 16:21:08 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:23.301 16:21:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.301 16:21:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.301 16:21:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.301 16:21:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.301 16:21:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.301 16:21:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.301 16:21:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.301 16:21:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:23.301 16:21:08 accel -- accel/accel.sh@41 -- # jq -r . 00:06:23.301 ************************************ 00:06:23.301 START TEST accel_dif_functional_tests 00:06:23.301 ************************************ 00:06:23.301 16:21:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:23.301 [2024-07-15 16:21:08.581615] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:23.302 [2024-07-15 16:21:08.581715] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62013 ] 00:06:23.302 [2024-07-15 16:21:08.716775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.302 [2024-07-15 16:21:08.829406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.302 [2024-07-15 16:21:08.829550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.302 [2024-07-15 16:21:08.829553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.560 [2024-07-15 16:21:08.884164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.560 00:06:23.560 00:06:23.560 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.560 http://cunit.sourceforge.net/ 00:06:23.560 00:06:23.560 00:06:23.561 Suite: accel_dif 00:06:23.561 Test: verify: DIF generated, GUARD check ...passed 00:06:23.561 Test: verify: DIF generated, APPTAG check ...passed 00:06:23.561 Test: verify: DIF generated, REFTAG check ...passed 00:06:23.561 Test: verify: DIF not generated, GUARD check ...[2024-07-15 16:21:08.922482] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:23.561 passed 00:06:23.561 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 16:21:08.922842] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:23.561 passed 00:06:23.561 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 16:21:08.923062] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5apassed 00:06:23.561 Test: verify: APPTAG correct, APPTAG check ...5a 00:06:23.561 passed 00:06:23.561 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:23.561 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:06:23.561 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-15 16:21:08.923562] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:23.561 passed 00:06:23.561 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:23.561 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 16:21:08.924071] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:23.561 passed 00:06:23.561 Test: verify copy: DIF generated, GUARD check ...passed 00:06:23.561 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:23.561 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:23.561 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 16:21:08.924873] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:23.561 passed 00:06:23.561 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 16:21:08.925084] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:23.561 passed 00:06:23.561 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 16:21:08.925317] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:23.561 passed 00:06:23.561 Test: generate copy: DIF generated, GUARD check ...passed 00:06:23.561 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:23.561 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:23.561 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:23.561 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:23.561 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:23.561 Test: generate copy: iovecs-len validate ...[2024-07-15 16:21:08.925686] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:23.561 passed 00:06:23.561 Test: generate copy: buffer alignment validate ...passed 00:06:23.561 00:06:23.561 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.561 suites 1 1 n/a 0 0 00:06:23.561 tests 26 26 26 0 0 00:06:23.561 asserts 115 115 115 0 n/a 00:06:23.561 00:06:23.561 Elapsed time = 0.008 seconds 00:06:23.820 ************************************ 00:06:23.820 END TEST accel_dif_functional_tests 00:06:23.820 ************************************ 00:06:23.820 00:06:23.820 real 0m0.614s 00:06:23.820 user 0m0.821s 00:06:23.820 sys 0m0.153s 00:06:23.820 16:21:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.820 16:21:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:23.820 16:21:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.820 ************************************ 00:06:23.820 END TEST accel 00:06:23.820 ************************************ 00:06:23.820 00:06:23.820 real 0m34.957s 00:06:23.820 user 0m36.849s 00:06:23.820 sys 0m3.961s 00:06:23.820 16:21:09 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.820 16:21:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.820 16:21:09 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.820 16:21:09 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:23.820 16:21:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.820 16:21:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.820 16:21:09 -- common/autotest_common.sh@10 -- # set +x 00:06:23.820 ************************************ 00:06:23.820 START TEST accel_rpc 00:06:23.820 ************************************ 00:06:23.820 16:21:09 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:23.820 * Looking for test storage... 00:06:23.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:23.820 16:21:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:23.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.820 16:21:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62077 00:06:23.820 16:21:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62077 00:06:23.820 16:21:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:23.820 16:21:09 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62077 ']' 00:06:23.820 16:21:09 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.820 16:21:09 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.820 16:21:09 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.820 16:21:09 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.820 16:21:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.078 [2024-07-15 16:21:09.370919] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:24.078 [2024-07-15 16:21:09.371011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62077 ] 00:06:24.078 [2024-07-15 16:21:09.505314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.078 [2024-07-15 16:21:09.620974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.013 16:21:10 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.013 16:21:10 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.013 16:21:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:25.013 16:21:10 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:25.013 16:21:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:25.013 16:21:10 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:25.013 16:21:10 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:25.013 16:21:10 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.013 16:21:10 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.013 16:21:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.013 ************************************ 00:06:25.013 START TEST accel_assign_opcode 00:06:25.013 ************************************ 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:25.013 [2024-07-15 16:21:10.329802] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:25.013 [2024-07-15 16:21:10.337790] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.013 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:25.013 [2024-07-15 16:21:10.399809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.272 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.272 16:21:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:25.272 16:21:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:25.272 16:21:10 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.272 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:25.272 16:21:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:25.272 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.272 software 00:06:25.272 ************************************ 00:06:25.272 END TEST accel_assign_opcode 00:06:25.272 ************************************ 00:06:25.272 00:06:25.272 real 0m0.291s 00:06:25.272 user 0m0.051s 00:06:25.272 sys 0m0.007s 00:06:25.272 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.272 16:21:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:25.272 16:21:10 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62077 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62077 ']' 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62077 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62077 00:06:25.272 killing process with pid 62077 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62077' 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@967 -- # kill 62077 00:06:25.272 16:21:10 accel_rpc -- common/autotest_common.sh@972 -- # wait 62077 00:06:25.531 00:06:25.531 real 0m1.834s 00:06:25.531 user 0m1.900s 00:06:25.531 sys 0m0.417s 00:06:25.531 ************************************ 00:06:25.531 END TEST accel_rpc 00:06:25.531 ************************************ 00:06:25.531 16:21:11 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.531 16:21:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.790 16:21:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.790 16:21:11 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:25.790 16:21:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.790 16:21:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.790 16:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:25.790 ************************************ 00:06:25.790 START TEST app_cmdline 00:06:25.790 ************************************ 00:06:25.790 16:21:11 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:25.790 * Looking for test storage... 00:06:25.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:25.790 16:21:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:25.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
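With accel_rpc finished above (and app_cmdline now spinning up its own spdk_tgt), the opcode-assignment flow that accel_rpc exercised can be replayed by hand against a target started with --wait-for-rpc and listening on the default /var/tmp/spdk.sock. A sketch using the same RPCs that appear in the trace; the per-step comments are my reading of the test, not part of the log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m software      # pin the "copy" opcode to the software module
    $RPC framework_start_init                      # finish subsystem init so the assignment takes effect
    $RPC accel_get_opc_assignments | jq -r .copy   # expected to print: software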
00:06:25.790 16:21:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62172 00:06:25.790 16:21:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:25.790 16:21:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62172 00:06:25.790 16:21:11 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62172 ']' 00:06:25.790 16:21:11 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.790 16:21:11 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.790 16:21:11 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.790 16:21:11 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.790 16:21:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.790 [2024-07-15 16:21:11.281213] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:25.790 [2024-07-15 16:21:11.281297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62172 ] 00:06:26.049 [2024-07-15 16:21:11.412014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.049 [2024-07-15 16:21:11.537215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.049 [2024-07-15 16:21:11.593063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:26.985 { 00:06:26.985 "version": "SPDK v24.09-pre git sha1 bdeef1ed3", 00:06:26.985 "fields": { 00:06:26.985 "major": 24, 00:06:26.985 "minor": 9, 00:06:26.985 "patch": 0, 00:06:26.985 "suffix": "-pre", 00:06:26.985 "commit": "bdeef1ed3" 00:06:26.985 } 00:06:26.985 } 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:26.985 16:21:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.985 16:21:12 app_cmdline -- 
common/autotest_common.sh@648 -- # local es=0 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:26.985 16:21:12 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.245 request: 00:06:27.245 { 00:06:27.245 "method": "env_dpdk_get_mem_stats", 00:06:27.245 "req_id": 1 00:06:27.245 } 00:06:27.245 Got JSON-RPC error response 00:06:27.245 response: 00:06:27.245 { 00:06:27.245 "code": -32601, 00:06:27.245 "message": "Method not found" 00:06:27.245 } 00:06:27.245 16:21:12 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:27.245 16:21:12 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.245 16:21:12 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.245 16:21:12 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.245 16:21:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62172 00:06:27.245 16:21:12 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62172 ']' 00:06:27.245 16:21:12 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62172 00:06:27.245 16:21:12 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:27.245 16:21:12 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.245 16:21:12 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62172 00:06:27.504 killing process with pid 62172 00:06:27.504 16:21:12 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.504 16:21:12 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.504 16:21:12 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62172' 00:06:27.504 16:21:12 app_cmdline -- common/autotest_common.sh@967 -- # kill 62172 00:06:27.504 16:21:12 app_cmdline -- common/autotest_common.sh@972 -- # wait 62172 00:06:27.763 ************************************ 00:06:27.763 END TEST app_cmdline 00:06:27.763 ************************************ 00:06:27.763 00:06:27.763 real 0m2.086s 00:06:27.763 user 0m2.611s 00:06:27.763 sys 0m0.454s 00:06:27.763 16:21:13 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.763 16:21:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.763 16:21:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.763 16:21:13 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:27.763 16:21:13 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.763 16:21:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.763 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:06:27.763 ************************************ 00:06:27.763 START TEST version 00:06:27.763 ************************************ 00:06:27.763 16:21:13 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.022 * Looking for test storage... 00:06:28.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:28.022 16:21:13 version -- app/version.sh@17 -- # get_header_version major 00:06:28.022 16:21:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.022 16:21:13 version -- app/version.sh@14 -- # cut -f2 00:06:28.022 16:21:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.022 16:21:13 version -- app/version.sh@17 -- # major=24 00:06:28.022 16:21:13 version -- app/version.sh@18 -- # get_header_version minor 00:06:28.022 16:21:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.022 16:21:13 version -- app/version.sh@14 -- # cut -f2 00:06:28.022 16:21:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.022 16:21:13 version -- app/version.sh@18 -- # minor=9 00:06:28.022 16:21:13 version -- app/version.sh@19 -- # get_header_version patch 00:06:28.022 16:21:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.022 16:21:13 version -- app/version.sh@14 -- # cut -f2 00:06:28.022 16:21:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.022 16:21:13 version -- app/version.sh@19 -- # patch=0 00:06:28.022 16:21:13 version -- app/version.sh@20 -- # get_header_version suffix 00:06:28.022 16:21:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.022 16:21:13 version -- app/version.sh@14 -- # cut -f2 00:06:28.022 16:21:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.022 16:21:13 version -- app/version.sh@20 -- # suffix=-pre 00:06:28.022 16:21:13 version -- app/version.sh@22 -- # version=24.9 00:06:28.022 16:21:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:28.022 16:21:13 version -- app/version.sh@28 -- # version=24.9rc0 00:06:28.022 16:21:13 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:28.022 16:21:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.022 16:21:13 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:28.022 16:21:13 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:28.022 00:06:28.022 real 0m0.136s 00:06:28.022 user 0m0.074s 00:06:28.022 sys 0m0.090s 00:06:28.022 ************************************ 00:06:28.022 END TEST version 00:06:28.022 ************************************ 00:06:28.022 16:21:13 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.022 16:21:13 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.022 16:21:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.022 16:21:13 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:06:28.022 16:21:13 -- spdk/autotest.sh@198 -- # uname -s 00:06:28.022 16:21:13 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:28.022 16:21:13 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:28.022 16:21:13 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:28.022 16:21:13 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:28.022 16:21:13 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:28.022 16:21:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.022 16:21:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.022 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:06:28.022 ************************************ 00:06:28.022 START TEST spdk_dd 00:06:28.022 ************************************ 00:06:28.022 16:21:13 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:28.022 * Looking for test storage... 00:06:28.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:28.022 16:21:13 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.022 16:21:13 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.022 16:21:13 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.022 16:21:13 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.022 16:21:13 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.022 16:21:13 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.022 16:21:13 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.022 16:21:13 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:28.022 16:21:13 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.022 16:21:13 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:28.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:28.541 
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:28.541 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:28.541 16:21:13 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:28.541 16:21:13 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:28.541 16:21:13 
spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:28.541 16:21:13 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:28.541 16:21:13 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.541 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- 
# [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 
-- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:28.542 * spdk_dd linked to liburing 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:28.542 16:21:13 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:28.542 
16:21:13 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:28.542 16:21:13 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:28.543 16:21:13 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:28.543 16:21:14 spdk_dd -- 
common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:28.543 16:21:14 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:28.543 16:21:14 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:28.543 16:21:14 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:28.543 16:21:14 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:28.543 16:21:14 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:28.543 16:21:14 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:28.543 16:21:14 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:28.543 16:21:14 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:28.543 16:21:14 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:28.543 16:21:14 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.543 16:21:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:28.543 ************************************ 00:06:28.543 START TEST spdk_dd_basic_rw 00:06:28.543 ************************************ 00:06:28.543 16:21:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:28.543 * Looking for test storage... 
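The trace above is dd/common.sh deciding whether this spdk_dd build can exercise the uring path: it walks the shared objects the binary links against, pattern-matches each one against liburing.so.*, checks for a system-wide /usr/lib64/liburing.so.2, and then exports liburing_in_use=1. A minimal sketch of that check, assuming ldd as the source of the library list and SPDK_BIN_DIR pointing at the built binaries (both are illustrative assumptions, not the script's literal code):

# Hedged sketch: detect whether spdk_dd links liburing.
liburing_in_use=0
while read -r lib _ so _; do
  if [[ $lib == liburing.so.* ]]; then
    printf '* spdk_dd linked to liburing\n'
    liburing_in_use=1
  fi
done < <(ldd "$SPDK_BIN_DIR/spdk_dd")
# The trace also accepts a system-wide liburing even if the link scan found nothing.
[[ -e /usr/lib64/liburing.so.2 ]] && liburing_in_use=1
export liburing_in_use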
00:06:28.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:28.803 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.803 16:21:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.803 16:21:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.803 16:21:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.803 16:21:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.803 16:21:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.803 16:21:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:28.804 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.805 ************************************ 00:06:28.805 START TEST dd_bs_lt_native_bs 00:06:28.805 ************************************ 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:28.805 16:21:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:29.065 { 00:06:29.065 "subsystems": [ 00:06:29.065 { 00:06:29.065 "subsystem": "bdev", 00:06:29.065 "config": [ 00:06:29.065 { 00:06:29.065 "params": { 00:06:29.065 "trtype": "pcie", 00:06:29.065 "traddr": "0000:00:10.0", 00:06:29.065 "name": "Nvme0" 00:06:29.065 }, 00:06:29.065 "method": "bdev_nvme_attach_controller" 00:06:29.065 }, 00:06:29.065 { 00:06:29.065 "method": "bdev_wait_for_examine" 00:06:29.065 } 00:06:29.065 ] 00:06:29.065 } 00:06:29.065 ] 00:06:29.065 } 00:06:29.065 [2024-07-15 16:21:14.357761] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
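Before this negative test could run, get_native_nvme_bs derived native_bs=4096 by capturing spdk_nvme_identify output for the controller at 0000:00:10.0 and applying the two regex matches visible in the [[ =~ ]] expressions above (first the current LBA format index, then that format's data size). A compressed sketch of that probe; SPDK_BIN_DIR is assumed, and the regexes mirror the ones shown in the trace:

# Hedged reconstruction of the native-block-size probe from dd/common.sh.
pci=0000:00:10.0
mapfile -t id < <("$SPDK_BIN_DIR/spdk_nvme_identify" -r "trtype:pcie traddr:$pci")
cur_re='Current LBA Format: *LBA Format #([0-9]+)'
[[ ${id[*]} =~ $cur_re ]] && lbaf=${BASH_REMATCH[1]}
size_re="LBA Format #$lbaf: Data Size: *([0-9]+)"
[[ ${id[*]} =~ $size_re ]] && native_bs=${BASH_REMATCH[1]}
echo "$native_bs"   # 4096 for this QEMU namespace (current format is #04)

With native_bs known, dd_bs_lt_native_bs deliberately passes --bs=2048 and expects spdk_dd to refuse it, which is the "--bs value cannot be less than ... native block size" error that follows.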
00:06:29.065 [2024-07-15 16:21:14.357889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62492 ] 00:06:29.065 [2024-07-15 16:21:14.494232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.324 [2024-07-15 16:21:14.628831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.324 [2024-07-15 16:21:14.682988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.324 [2024-07-15 16:21:14.788241] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:29.324 [2024-07-15 16:21:14.788536] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.583 [2024-07-15 16:21:14.912643] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.583 00:06:29.583 real 0m0.707s 00:06:29.583 user 0m0.506s 00:06:29.583 sys 0m0.159s 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:29.583 ************************************ 00:06:29.583 END TEST dd_bs_lt_native_bs 00:06:29.583 ************************************ 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.583 ************************************ 00:06:29.583 START TEST dd_rw 00:06:29.583 ************************************ 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:29.583 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.150 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:30.150 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:30.150 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.150 16:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.150 [2024-07-15 16:21:15.698048] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:30.150 [2024-07-15 16:21:15.698153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62530 ] 00:06:30.150 { 00:06:30.150 "subsystems": [ 00:06:30.150 { 00:06:30.150 "subsystem": "bdev", 00:06:30.150 "config": [ 00:06:30.150 { 00:06:30.150 "params": { 00:06:30.150 "trtype": "pcie", 00:06:30.150 "traddr": "0000:00:10.0", 00:06:30.150 "name": "Nvme0" 00:06:30.150 }, 00:06:30.150 "method": "bdev_nvme_attach_controller" 00:06:30.150 }, 00:06:30.150 { 00:06:30.150 "method": "bdev_wait_for_examine" 00:06:30.150 } 00:06:30.150 ] 00:06:30.150 } 00:06:30.150 ] 00:06:30.150 } 00:06:30.409 [2024-07-15 16:21:15.835032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.409 [2024-07-15 16:21:15.950107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.667 [2024-07-15 16:21:16.005347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.926  Copying: 60/60 [kB] (average 19 MBps) 00:06:30.926 00:06:30.926 16:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:30.926 16:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:30.926 16:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.926 16:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.926 [2024-07-15 16:21:16.393159] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
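What follows is the read-back half of the first basic_rw pass. Every (bs, qd) combination in this test uses the same pattern: write count blocks from dd.dump0 into the Nvme0n1 bdev, read the same region back into dd.dump1, then diff the two dumps. A hedged sketch of one pass with the bs=4096, qd=1, count=15 values from the trace; conf.json stands in for the configuration that gen_conf actually feeds through /dev/fd:

# One write/read/verify round trip, reconstructed from the trace.
bs=4096 qd=1 count=15
"$SPDK_BIN_DIR/spdk_dd" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json conf.json
"$SPDK_BIN_DIR/spdk_dd" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json conf.json
diff -q dd.dump0 dd.dump1   # identical dumps mean the 60 kB round trip succeeded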
00:06:30.926 [2024-07-15 16:21:16.393274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62538 ] 00:06:30.926 { 00:06:30.926 "subsystems": [ 00:06:30.926 { 00:06:30.926 "subsystem": "bdev", 00:06:30.926 "config": [ 00:06:30.926 { 00:06:30.926 "params": { 00:06:30.926 "trtype": "pcie", 00:06:30.926 "traddr": "0000:00:10.0", 00:06:30.926 "name": "Nvme0" 00:06:30.926 }, 00:06:30.926 "method": "bdev_nvme_attach_controller" 00:06:30.926 }, 00:06:30.926 { 00:06:30.926 "method": "bdev_wait_for_examine" 00:06:30.926 } 00:06:30.926 ] 00:06:30.926 } 00:06:30.926 ] 00:06:30.926 } 00:06:31.186 [2024-07-15 16:21:16.533244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.186 [2024-07-15 16:21:16.649269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.186 [2024-07-15 16:21:16.705424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.726  Copying: 60/60 [kB] (average 19 MBps) 00:06:31.726 00:06:31.726 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.726 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:31.726 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:31.726 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:31.726 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:31.727 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:31.727 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:31.727 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:31.727 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:31.727 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.727 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.727 [2024-07-15 16:21:17.093636] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
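After the diff, clear_nvme resets the bdev before the next combination by writing a single 1 MiB block of zeroes over the region that was just used; that is the --if=/dev/zero --bs=1048576 --count=1 invocation whose startup is traced here. Sketch under the same assumptions as above:

# Zero-fill the bdev so the next (bs, qd) pass starts from a clean device.
"$SPDK_BIN_DIR/spdk_dd" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json conf.json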
00:06:31.727 [2024-07-15 16:21:17.094046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62559 ] 00:06:31.727 { 00:06:31.727 "subsystems": [ 00:06:31.727 { 00:06:31.727 "subsystem": "bdev", 00:06:31.727 "config": [ 00:06:31.727 { 00:06:31.727 "params": { 00:06:31.727 "trtype": "pcie", 00:06:31.727 "traddr": "0000:00:10.0", 00:06:31.727 "name": "Nvme0" 00:06:31.727 }, 00:06:31.727 "method": "bdev_nvme_attach_controller" 00:06:31.727 }, 00:06:31.727 { 00:06:31.727 "method": "bdev_wait_for_examine" 00:06:31.727 } 00:06:31.727 ] 00:06:31.727 } 00:06:31.727 ] 00:06:31.727 } 00:06:31.727 [2024-07-15 16:21:17.233752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.986 [2024-07-15 16:21:17.345630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.986 [2024-07-15 16:21:17.398602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.245  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:32.245 00:06:32.245 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:32.245 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:32.245 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:32.245 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:32.245 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:32.245 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:32.245 16:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.812 16:21:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:32.812 16:21:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:32.812 16:21:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.812 16:21:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.071 { 00:06:33.071 "subsystems": [ 00:06:33.071 { 00:06:33.071 "subsystem": "bdev", 00:06:33.071 "config": [ 00:06:33.071 { 00:06:33.071 "params": { 00:06:33.071 "trtype": "pcie", 00:06:33.071 "traddr": "0000:00:10.0", 00:06:33.071 "name": "Nvme0" 00:06:33.071 }, 00:06:33.071 "method": "bdev_nvme_attach_controller" 00:06:33.071 }, 00:06:33.071 { 00:06:33.071 "method": "bdev_wait_for_examine" 00:06:33.071 } 00:06:33.071 ] 00:06:33.071 } 00:06:33.071 ] 00:06:33.071 } 00:06:33.071 [2024-07-15 16:21:18.369214] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
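The pass starting here repeats the same 61,440-byte transfer at queue depth 64. The combinations come from two small arrays set up at the top of dd_rw: qds=(1 64) and a block-size list built by shifting the native block size left by 0..2, giving 4096, 8192 and 16384 bytes. A sketch of that matrix, with run_pass as a hypothetical stand-in for the write/read/diff/clear cycle shown above:

native_bs=4096
qds=(1 64)
bss=()
for s in {0..2}; do
  bss+=($((native_bs << s)))       # 4096 8192 16384
done
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    run_pass "$bs" "$qd"           # hypothetical helper: write, read back, diff, clear
  done
done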
00:06:33.071 [2024-07-15 16:21:18.369331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62578 ] 00:06:33.071 [2024-07-15 16:21:18.510540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.330 [2024-07-15 16:21:18.623153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.330 [2024-07-15 16:21:18.676710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.589  Copying: 60/60 [kB] (average 58 MBps) 00:06:33.589 00:06:33.589 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:33.589 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:33.589 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.589 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.589 [2024-07-15 16:21:19.073565] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:33.589 [2024-07-15 16:21:19.073658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62597 ] 00:06:33.589 { 00:06:33.589 "subsystems": [ 00:06:33.589 { 00:06:33.589 "subsystem": "bdev", 00:06:33.589 "config": [ 00:06:33.589 { 00:06:33.589 "params": { 00:06:33.589 "trtype": "pcie", 00:06:33.589 "traddr": "0000:00:10.0", 00:06:33.589 "name": "Nvme0" 00:06:33.589 }, 00:06:33.589 "method": "bdev_nvme_attach_controller" 00:06:33.589 }, 00:06:33.589 { 00:06:33.589 "method": "bdev_wait_for_examine" 00:06:33.589 } 00:06:33.589 ] 00:06:33.589 } 00:06:33.589 ] 00:06:33.589 } 00:06:33.848 [2024-07-15 16:21:19.209900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.848 [2024-07-15 16:21:19.321115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.848 [2024-07-15 16:21:19.373745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.366  Copying: 60/60 [kB] (average 29 MBps) 00:06:34.366 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:34.366 16:21:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.366 [2024-07-15 16:21:19.764507] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:34.366 [2024-07-15 16:21:19.764631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62618 ] 00:06:34.366 { 00:06:34.366 "subsystems": [ 00:06:34.366 { 00:06:34.366 "subsystem": "bdev", 00:06:34.366 "config": [ 00:06:34.366 { 00:06:34.366 "params": { 00:06:34.366 "trtype": "pcie", 00:06:34.366 "traddr": "0000:00:10.0", 00:06:34.366 "name": "Nvme0" 00:06:34.366 }, 00:06:34.366 "method": "bdev_nvme_attach_controller" 00:06:34.366 }, 00:06:34.366 { 00:06:34.366 "method": "bdev_wait_for_examine" 00:06:34.366 } 00:06:34.366 ] 00:06:34.366 } 00:06:34.366 ] 00:06:34.366 } 00:06:34.366 [2024-07-15 16:21:19.903196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.625 [2024-07-15 16:21:20.014269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.625 [2024-07-15 16:21:20.067122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.883  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:34.883 00:06:34.883 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:34.883 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:34.883 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:34.883 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:34.883 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:34.883 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:34.883 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:34.883 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.474 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:35.474 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:35.474 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.474 16:21:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.474 [2024-07-15 16:21:20.984021] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
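From this point the block size doubles to 8192 bytes and the per-pass block count drops from 15 to 7, so the sizes reported in the trace are simply count * bs: 15 * 4096 = 61,440 bytes for the passes above and 7 * 8192 = 57,344 bytes for the ones that follow. The one-liner below just restates that arithmetic:

# size = count * bs for the two block sizes exercised so far
echo "$((15 * 4096)) $((7 * 8192))"   # prints: 61440 57344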
00:06:35.474 [2024-07-15 16:21:20.984386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62637 ] 00:06:35.474 { 00:06:35.474 "subsystems": [ 00:06:35.474 { 00:06:35.474 "subsystem": "bdev", 00:06:35.474 "config": [ 00:06:35.474 { 00:06:35.474 "params": { 00:06:35.474 "trtype": "pcie", 00:06:35.474 "traddr": "0000:00:10.0", 00:06:35.474 "name": "Nvme0" 00:06:35.474 }, 00:06:35.474 "method": "bdev_nvme_attach_controller" 00:06:35.474 }, 00:06:35.474 { 00:06:35.474 "method": "bdev_wait_for_examine" 00:06:35.474 } 00:06:35.474 ] 00:06:35.474 } 00:06:35.474 ] 00:06:35.474 } 00:06:35.734 [2024-07-15 16:21:21.116876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.734 [2024-07-15 16:21:21.228019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.734 [2024-07-15 16:21:21.281687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.253  Copying: 56/56 [kB] (average 54 MBps) 00:06:36.253 00:06:36.253 16:21:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:36.253 16:21:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:36.253 16:21:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.253 16:21:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.253 [2024-07-15 16:21:21.663356] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:36.253 [2024-07-15 16:21:21.663456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62645 ] 00:06:36.253 { 00:06:36.253 "subsystems": [ 00:06:36.253 { 00:06:36.253 "subsystem": "bdev", 00:06:36.253 "config": [ 00:06:36.253 { 00:06:36.253 "params": { 00:06:36.253 "trtype": "pcie", 00:06:36.253 "traddr": "0000:00:10.0", 00:06:36.253 "name": "Nvme0" 00:06:36.253 }, 00:06:36.253 "method": "bdev_nvme_attach_controller" 00:06:36.253 }, 00:06:36.253 { 00:06:36.253 "method": "bdev_wait_for_examine" 00:06:36.253 } 00:06:36.253 ] 00:06:36.253 } 00:06:36.253 ] 00:06:36.253 } 00:06:36.253 [2024-07-15 16:21:21.801022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.512 [2024-07-15 16:21:21.915391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.512 [2024-07-15 16:21:21.968321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.770  Copying: 56/56 [kB] (average 27 MBps) 00:06:36.770 00:06:36.770 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.771 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.029 [2024-07-15 16:21:22.356556] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
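Each spdk_dd invocation in this test receives its bdev configuration as JSON on a file descriptor (--json /dev/fd/6x); the brace-delimited blocks dumped throughout the trace are that document. It attaches the PCIe controller at 0000:00:10.0 as the Nvme0 bdev and then waits for bdev examination before the copy starts. A hedged sketch of writing an equivalent config by hand (the test itself generates it via gen_conf, not this way):

# Illustrative stand-alone config matching what the trace shows.
cat > conf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON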
00:06:37.029 [2024-07-15 16:21:22.356680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62666 ] 00:06:37.029 { 00:06:37.029 "subsystems": [ 00:06:37.029 { 00:06:37.029 "subsystem": "bdev", 00:06:37.029 "config": [ 00:06:37.029 { 00:06:37.029 "params": { 00:06:37.029 "trtype": "pcie", 00:06:37.029 "traddr": "0000:00:10.0", 00:06:37.029 "name": "Nvme0" 00:06:37.029 }, 00:06:37.029 "method": "bdev_nvme_attach_controller" 00:06:37.029 }, 00:06:37.029 { 00:06:37.029 "method": "bdev_wait_for_examine" 00:06:37.029 } 00:06:37.029 ] 00:06:37.029 } 00:06:37.029 ] 00:06:37.029 } 00:06:37.029 [2024-07-15 16:21:22.497646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.287 [2024-07-15 16:21:22.609426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.287 [2024-07-15 16:21:22.663088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.545  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:37.545 00:06:37.545 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:37.545 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:37.545 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:37.545 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:37.545 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:37.545 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:37.545 16:21:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.131 16:21:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:38.131 16:21:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:38.131 16:21:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.131 16:21:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.131 [2024-07-15 16:21:23.582828] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:38.131 [2024-07-15 16:21:23.583226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62685 ] 00:06:38.131 { 00:06:38.131 "subsystems": [ 00:06:38.131 { 00:06:38.131 "subsystem": "bdev", 00:06:38.131 "config": [ 00:06:38.131 { 00:06:38.131 "params": { 00:06:38.131 "trtype": "pcie", 00:06:38.131 "traddr": "0000:00:10.0", 00:06:38.131 "name": "Nvme0" 00:06:38.131 }, 00:06:38.131 "method": "bdev_nvme_attach_controller" 00:06:38.131 }, 00:06:38.131 { 00:06:38.131 "method": "bdev_wait_for_examine" 00:06:38.131 } 00:06:38.131 ] 00:06:38.131 } 00:06:38.131 ] 00:06:38.131 } 00:06:38.397 [2024-07-15 16:21:23.721890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.397 [2024-07-15 16:21:23.840415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.397 [2024-07-15 16:21:23.896094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.915  Copying: 56/56 [kB] (average 54 MBps) 00:06:38.915 00:06:38.915 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:38.915 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:38.915 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.915 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.915 { 00:06:38.915 "subsystems": [ 00:06:38.915 { 00:06:38.915 "subsystem": "bdev", 00:06:38.915 "config": [ 00:06:38.915 { 00:06:38.915 "params": { 00:06:38.915 "trtype": "pcie", 00:06:38.915 "traddr": "0000:00:10.0", 00:06:38.915 "name": "Nvme0" 00:06:38.915 }, 00:06:38.915 "method": "bdev_nvme_attach_controller" 00:06:38.915 }, 00:06:38.915 { 00:06:38.915 "method": "bdev_wait_for_examine" 00:06:38.915 } 00:06:38.915 ] 00:06:38.915 } 00:06:38.915 ] 00:06:38.915 } 00:06:38.915 [2024-07-15 16:21:24.295671] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:38.915 [2024-07-15 16:21:24.296132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62704 ] 00:06:38.915 [2024-07-15 16:21:24.440118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.189 [2024-07-15 16:21:24.529415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.189 [2024-07-15 16:21:24.585687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.447  Copying: 56/56 [kB] (average 54 MBps) 00:06:39.447 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.447 16:21:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.447 [2024-07-15 16:21:24.967593] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:39.447 [2024-07-15 16:21:24.967712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62724 ] 00:06:39.447 { 00:06:39.447 "subsystems": [ 00:06:39.447 { 00:06:39.447 "subsystem": "bdev", 00:06:39.447 "config": [ 00:06:39.447 { 00:06:39.447 "params": { 00:06:39.447 "trtype": "pcie", 00:06:39.447 "traddr": "0000:00:10.0", 00:06:39.447 "name": "Nvme0" 00:06:39.447 }, 00:06:39.447 "method": "bdev_nvme_attach_controller" 00:06:39.447 }, 00:06:39.447 { 00:06:39.447 "method": "bdev_wait_for_examine" 00:06:39.447 } 00:06:39.447 ] 00:06:39.447 } 00:06:39.447 ] 00:06:39.447 } 00:06:39.704 [2024-07-15 16:21:25.105512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.704 [2024-07-15 16:21:25.209916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.962 [2024-07-15 16:21:25.262686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.221  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:40.221 00:06:40.221 16:21:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:40.221 16:21:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:40.221 16:21:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:40.221 16:21:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:40.221 16:21:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:40.221 16:21:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:40.221 16:21:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:40.221 16:21:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.788 16:21:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:40.788 16:21:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:40.788 16:21:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.788 16:21:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.788 [2024-07-15 16:21:26.109503] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
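Note on the cycle logged above: each (bs, qd) round of basic_rw.sh is three spdk_dd invocations plus a check — write dd.dump0 to the Nvme0n1 bdev, read the same region back into dd.dump1, byte-compare the two files with diff -q, then clear_nvme zeroes the first MiB before the next combination. A minimal sketch of that cycle with the bs=8192/qd=64 values from this run; the --json /dev/fd/62 path in the log is consistent with a process substitution, and gen_conf here is only a stand-in that echoes the bdev JSON dumped above (the real helper lives in dd/common.sh):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  gen_conf() {   # stand-in: emits the bdev config seen in the log
    cat <<'JSON'
  {"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
     "method":"bdev_nvme_attach_controller"},
    {"method":"bdev_wait_for_examine"}]}]}
  JSON
  }
  # write dd.dump0 to the Nvme0n1 bdev (56 KiB moved at bs=8192, qd=64)
  $DD --if=dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json <(gen_conf)
  # read the same 7 blocks back into dd.dump1
  $DD --ib=Nvme0n1 --of=dd.dump1 --bs=8192 --qd=64 --count=7 --json <(gen_conf)
  diff -q dd.dump0 dd.dump1                      # files must be identical
  # clear_nvme: zero the first 1 MiB of the bdev before the next bs/qd round
  $DD --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)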
00:06:40.788 [2024-07-15 16:21:26.109629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62744 ] 00:06:40.788 { 00:06:40.788 "subsystems": [ 00:06:40.788 { 00:06:40.788 "subsystem": "bdev", 00:06:40.788 "config": [ 00:06:40.788 { 00:06:40.788 "params": { 00:06:40.788 "trtype": "pcie", 00:06:40.788 "traddr": "0000:00:10.0", 00:06:40.788 "name": "Nvme0" 00:06:40.788 }, 00:06:40.788 "method": "bdev_nvme_attach_controller" 00:06:40.788 }, 00:06:40.788 { 00:06:40.788 "method": "bdev_wait_for_examine" 00:06:40.788 } 00:06:40.788 ] 00:06:40.788 } 00:06:40.788 ] 00:06:40.788 } 00:06:40.788 [2024-07-15 16:21:26.248627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.046 [2024-07-15 16:21:26.356590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.046 [2024-07-15 16:21:26.412286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.304  Copying: 48/48 [kB] (average 46 MBps) 00:06:41.304 00:06:41.304 16:21:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:41.304 16:21:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:41.304 16:21:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.304 16:21:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.304 [2024-07-15 16:21:26.794151] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:41.304 [2024-07-15 16:21:26.794244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62752 ] 00:06:41.304 { 00:06:41.304 "subsystems": [ 00:06:41.304 { 00:06:41.304 "subsystem": "bdev", 00:06:41.304 "config": [ 00:06:41.304 { 00:06:41.304 "params": { 00:06:41.304 "trtype": "pcie", 00:06:41.304 "traddr": "0000:00:10.0", 00:06:41.304 "name": "Nvme0" 00:06:41.304 }, 00:06:41.304 "method": "bdev_nvme_attach_controller" 00:06:41.304 }, 00:06:41.304 { 00:06:41.304 "method": "bdev_wait_for_examine" 00:06:41.304 } 00:06:41.304 ] 00:06:41.304 } 00:06:41.304 ] 00:06:41.304 } 00:06:41.611 [2024-07-15 16:21:26.929668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.611 [2024-07-15 16:21:27.060335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.611 [2024-07-15 16:21:27.122963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.142  Copying: 48/48 [kB] (average 46 MBps) 00:06:42.142 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.142 16:21:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.142 [2024-07-15 16:21:27.527061] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:42.142 [2024-07-15 16:21:27.527171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62773 ] 00:06:42.142 { 00:06:42.142 "subsystems": [ 00:06:42.142 { 00:06:42.143 "subsystem": "bdev", 00:06:42.143 "config": [ 00:06:42.143 { 00:06:42.143 "params": { 00:06:42.143 "trtype": "pcie", 00:06:42.143 "traddr": "0000:00:10.0", 00:06:42.143 "name": "Nvme0" 00:06:42.143 }, 00:06:42.143 "method": "bdev_nvme_attach_controller" 00:06:42.143 }, 00:06:42.143 { 00:06:42.143 "method": "bdev_wait_for_examine" 00:06:42.143 } 00:06:42.143 ] 00:06:42.143 } 00:06:42.143 ] 00:06:42.143 } 00:06:42.143 [2024-07-15 16:21:27.662111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.401 [2024-07-15 16:21:27.786007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.401 [2024-07-15 16:21:27.849327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.659  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:42.659 00:06:42.659 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:42.659 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:42.659 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:42.659 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:42.659 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:42.659 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:42.659 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.225 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:43.225 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:43.225 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.225 16:21:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.225 [2024-07-15 16:21:28.699618] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:43.225 [2024-07-15 16:21:28.699724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62793 ] 00:06:43.225 { 00:06:43.225 "subsystems": [ 00:06:43.225 { 00:06:43.225 "subsystem": "bdev", 00:06:43.225 "config": [ 00:06:43.225 { 00:06:43.225 "params": { 00:06:43.225 "trtype": "pcie", 00:06:43.225 "traddr": "0000:00:10.0", 00:06:43.225 "name": "Nvme0" 00:06:43.225 }, 00:06:43.225 "method": "bdev_nvme_attach_controller" 00:06:43.225 }, 00:06:43.225 { 00:06:43.225 "method": "bdev_wait_for_examine" 00:06:43.225 } 00:06:43.225 ] 00:06:43.225 } 00:06:43.225 ] 00:06:43.225 } 00:06:43.484 [2024-07-15 16:21:28.838078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.484 [2024-07-15 16:21:28.954680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.484 [2024-07-15 16:21:29.011586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.001  Copying: 48/48 [kB] (average 46 MBps) 00:06:44.001 00:06:44.001 16:21:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:44.001 16:21:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:44.001 16:21:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.001 16:21:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.001 [2024-07-15 16:21:29.405949] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:44.001 [2024-07-15 16:21:29.406040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62812 ] 00:06:44.001 { 00:06:44.001 "subsystems": [ 00:06:44.001 { 00:06:44.001 "subsystem": "bdev", 00:06:44.001 "config": [ 00:06:44.001 { 00:06:44.001 "params": { 00:06:44.001 "trtype": "pcie", 00:06:44.001 "traddr": "0000:00:10.0", 00:06:44.001 "name": "Nvme0" 00:06:44.001 }, 00:06:44.001 "method": "bdev_nvme_attach_controller" 00:06:44.001 }, 00:06:44.001 { 00:06:44.001 "method": "bdev_wait_for_examine" 00:06:44.001 } 00:06:44.001 ] 00:06:44.001 } 00:06:44.001 ] 00:06:44.001 } 00:06:44.001 [2024-07-15 16:21:29.545909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.261 [2024-07-15 16:21:29.651586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.261 [2024-07-15 16:21:29.709948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.519  Copying: 48/48 [kB] (average 46 MBps) 00:06:44.519 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.519 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.778 [2024-07-15 16:21:30.091515] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
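The byte totals in these rounds follow directly from count x bs: the 56 KiB copies above correspond to size=57344, the 48 KiB copies to size=49152, and clear_nvme always moves a single 1048576-byte block. For example:

  echo $((7 * 8192))    # 57344 bytes = 56 KiB (bs=8192 rounds)
  echo $((3 * 16384))   # 49152 bytes = 48 KiB (bs=16384 rounds)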
00:06:44.778 [2024-07-15 16:21:30.091606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62828 ] 00:06:44.778 { 00:06:44.778 "subsystems": [ 00:06:44.778 { 00:06:44.778 "subsystem": "bdev", 00:06:44.778 "config": [ 00:06:44.778 { 00:06:44.778 "params": { 00:06:44.778 "trtype": "pcie", 00:06:44.778 "traddr": "0000:00:10.0", 00:06:44.778 "name": "Nvme0" 00:06:44.778 }, 00:06:44.778 "method": "bdev_nvme_attach_controller" 00:06:44.778 }, 00:06:44.778 { 00:06:44.778 "method": "bdev_wait_for_examine" 00:06:44.778 } 00:06:44.778 ] 00:06:44.778 } 00:06:44.778 ] 00:06:44.778 } 00:06:44.778 [2024-07-15 16:21:30.230054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.088 [2024-07-15 16:21:30.331519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.088 [2024-07-15 16:21:30.387763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.347  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:45.347 00:06:45.347 ************************************ 00:06:45.347 END TEST dd_rw 00:06:45.347 ************************************ 00:06:45.347 00:06:45.347 real 0m15.684s 00:06:45.347 user 0m11.727s 00:06:45.347 sys 0m5.492s 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.347 ************************************ 00:06:45.347 START TEST dd_rw_offset 00:06:45.347 ************************************ 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:45.347 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:45.348 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=nva3pwa6uch3hbt8lyeqncg7hnb271zebvurb06iasmf8rv47f2f8m0aa101roybvwbai5czmpapyfkd70wwf6rbqwr6hhyn00pmqzz9l6uywu59r3ayyqla038njvjgx2gw54a5grfxo7xwywpi13wdcqusdvc09zgb20exiebaur1q2m0mhp5qxmrx569zuattto5y0e7og52m3v37837vajllh4zhab6zol3ucg98fsraxvp33wjrgbg7ii9ghc1c40hgdng7yc08dsb5qye7fw2v8r8wpyuny30rm28e77npl4npwrrlby7au5dqtdhl1tdmrtxv4hmzple7erpqmhviucq2y7g64o2s4pe886501hh64aecplx68ngmpf7qqu0yl9uvuix4lvh7x5viztllr2h1on4duke16kfa2bascwazem5i200k62oof3myiw1gwe5mya1qmubmwdb5tvkrnvam6afteo3rdclbot4iixat9kecb521478lo7wnjh4a8qo2xyvr3xmvtvr82cextzq01351fmrkgjig3k1aq2riccraxeu1brzhvf23g7szalbc4ehk078zojkrm8mnttq2wsxvkrhn24xcb5ecoybhziktg9swraty1gvxbg074dx7dfaznazzue0vmga0k97uagb2524mc72cz68fgc903rr6r9iyaxf3p2vsk63iaj464klp3ked51huje344i0bbgrxaj4pzzf2utsvdek2l38g9h1rdmhy2xr95bbes2m3cddllxou202pynwxee3nuu37z8f8syywf0mnp3fy20rxtloouldqa028zaf19mmytp79a21umi784ylahg3t86rk8e4u5rp7f6qv6n45ute4gkparl2e5nkd45tv9j835wgpjuexe3gtlus761pq3wiwfw4w6okba3xanwv3ocld2unjfv4kvu9hwwygsgt6d2u0ie53kzlo0zx34zt1z4dh78ht8x2288rbx2izzmp5ynhogt9tpdvet6yz5qpfrmhz4t93vfg3fzlq2xj5fsx3ew3qpadn4noyk7g66cc17sv5niyth46g4iysderew7yqle7d0eg2mwsxoi3wejy0kdingzdlhed7ligyyoepxti3gsyqcprdgkolk9lxaojspq0xvk5aqjcw54xhshvaw4hh1lk9dvrhhwir29z51vyefxw99zznknmmbsr6kad8dqprnnyhke50pehzwtuxj8x8c3zdia6oatxklfo78aul2ynqtaa3ms64g0u39x4gi3ogg0qcmfhqboj0bqtcc5htgqfrgzwfmhmprlmtz5jdb4pj5kxpc4sydap6zkh6mmj74c7zmahg6qfz8u6lf166ykdih8el117li8kdbbie8r5od860rdz55k3k7u4p8m27ocektaea55gcguxkhxowvvofennpcy6r87v4lf2k0mh2xpy228l6xfqkl9ljcidrve6lpesk0u29el5kdh7yfyy99ughf8pbnbabpjanzn7znezc4qn27zc4xrjp2zasa4sw4cgd7x8fac2rc5gt0sr7gzgwp3f46ho82j8luai1trzcy96hadubuy881jvygjpbtjor13clrszss8dzbalxhgprua643xbxynpmbm29jmjfslfzhpvgmzdfyrj1cmq7eu7tt33brjvlx4detydd732o7l1xfdxi6nz5xsx5fez107h8mzqnu8gpeqkzbat1ko7bp9969zh4jq24rwipyxc0uhhals63xf87kc60xl82b6jks3l8f053adykkfrlo4sas7yhhrw6tfdtbanthmdew520pquwuncwvtwl2rtdwtqqavra054r8iwyo5n6y58lj3t4hm9efxxav4u0qdeyal4e9x1iskqtemr3nj7a2tljl2y2nqca880t4egf8fxqawdjtdbtdnrip62dv05p76lpeu406qbk0t7wl7a63tu7bk9vx2rf4d95hp7b6mizncefdop8pqtkrndqwdsx275xjzg865u1mojxfpd73s77wmbwgkqc7iatal4szuzodg68d06a5wgc7ijmbm3tg4jgp2h17xcet74pp5wl6gjjd91kgdg8i8y1azfybonbowfl4s7rbyarg5tgsv2078mbxt6dn2o5ap2ypk8jz97bcp5eya4z201nk1g9gt8qqisac87h1d308z05tgm5yjp7814gbhboawvr370nuj2ozw5ka0dg1kqz04mh7yrxsx82jkh52q9cvw4693bttp44ujejpwkpkk5en6yrpcx58n4wwb9htnqz6my11zexklmthihq1ghpyg9a46ds2pwga6ggywmj9mnx445fmwff9tan4xaberuuroopxrrnufsc6cqabmzt0itum30rvr60ej46ursz41m32wi1aqjddjxw6kfv9fiwpii4p3g6k8xw19yks6w9dg2wh8xqwiv46vdhwa4ev5oramvkp4guh4gr9w5ua4divvr9qahzk185kfb2953zitfp70zkrraxwn35cq2v1h78pv5ldcozmqrxh5ufy8ke4b81d3t218g3vci85bw272gaplnachxchkaeurlnvk6z5rt4s2i9wk2mggliyetgom09ako2d6jn2sw920x6t22l1cflrcetmuk9msouffryv5utp1rn2w7n7a4l8er47cnam0x8jc3w4ezpdpbiftwyy064l0c4azsiht7z3ie4szwge917j5563ku8mfy3mgwf1xbnuvzsb0flauiblx71ps0j6hxty1bkg2xx3unoan3f95w4cxgi367shcrmawaq0r67xpi5ags7j26wqr7m638q2jargrczdi9xebdpnty1or8wphoh8i0rgpuwifpnk3x2w6kgu7tfsf9qz5jcybjen5rhx6m867nr8oumuy35swaj45wckmsz1yit78l34cozyy7x5ifhgces02fejgcvc90pjrhc92ohr3hbh5hphu1cbd4uohvko76jmfnijgcam8fem932ahqn8g81mq16eyv5ylexysswwbpid3rtdpog75hvs69dj4tqc47gf626t7dno7ynhfi4bp35t66segp5989nsnwrzaibn0o444pk3f8v3fzpxo4fhi2uzpsqlacrc5wnczg0gipj7jlu8977ndj99nj9y9p7bvnsx67afixqzelfg1rs902twlf18muevfc11zv342j2i7bz9zqdvn12ktgot4nv56idv4cyxromi05jtqyhs8wtnbxu1tps4pnt3zhlc41489ekx6281rxr8u54ohgrrw58at5ztkul1x86mqy4qhdw3x6vrohyln7o2pxhp8fm6s7dod6uudamx4r32cv3t48hv0ha45a3rc55v65cqtqcwpbfq8boik0nn1xmaumsc0m4c85e2t62npgy6i7mrr5rcihwkskgwepcwtjkkgksti9ldhr18x9rjo3wqlhvrfjttwnd6ep4loy5r836l4wo2y642gfgrn22xwvr4km49c7wo6
uss5jv73bbjhd6f6t8d3kz9p2irq1fhdks89uyflmj5o9dt5ufmjzzh4k4rhi647bc5l5zwaw9bh8390t70tnwvb5z12aq4vpqoneec54g0xulsaop8apsa038rc8redrj9szufq6a96aex8ys4j8vzgx3q2ekll2c3xaudnn9swhxsh31vudd28e6q86ux5gjmptxyd43wharkc4479xe3uj25g4wrp3s9md51ptpi0ujqo9mmaw7pfy9uoda9me2q44bd649pwaxb485hnam8moqazvrfdmemrw7wt88pu2j3u8av7lrolqnw6u1rxa9xn5hdha0zzjnudlgzr943d5kvmye6pnx6mt9qpnhunstr62onxbmbaomngcv1jciu10lalbmqigqb5ui5agmr8webdwfdj1q0zsifj14nbaa0qsp42evq76l59aqhy9blnopkawtsdpmiv4h71m24ourmwilm4flf5pf8ifwgk5zxctckj0uebr9pqz2dzigrgiexo2juzj14ajcfpcci5a3agncnr79 00:06:45.348 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:45.348 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:45.348 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:45.348 16:21:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:45.348 [2024-07-15 16:21:30.893569] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:45.348 [2024-07-15 16:21:30.893655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62858 ] 00:06:45.606 { 00:06:45.606 "subsystems": [ 00:06:45.606 { 00:06:45.606 "subsystem": "bdev", 00:06:45.606 "config": [ 00:06:45.606 { 00:06:45.606 "params": { 00:06:45.606 "trtype": "pcie", 00:06:45.606 "traddr": "0000:00:10.0", 00:06:45.606 "name": "Nvme0" 00:06:45.606 }, 00:06:45.606 "method": "bdev_nvme_attach_controller" 00:06:45.606 }, 00:06:45.606 { 00:06:45.606 "method": "bdev_wait_for_examine" 00:06:45.606 } 00:06:45.606 ] 00:06:45.606 } 00:06:45.606 ] 00:06:45.606 } 00:06:45.606 [2024-07-15 16:21:31.032073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.606 [2024-07-15 16:21:31.140552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.865 [2024-07-15 16:21:31.196958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.124  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:46.124 00:06:46.124 16:21:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:46.124 16:21:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:46.124 16:21:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:46.124 16:21:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:46.124 [2024-07-15 16:21:31.585128] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
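The dd_rw_offset (basic_offset) case above exercises --seek/--skip: 4096 bytes from the suite's gen_bytes helper are written to the bdev at output-block offset 1, read back from the same offset, and compared in the shell (the escaped-pattern [[ ... ]] test that follows). A rough sketch of the sequence; the redirections are not visible in the xtrace, so where the data lands in dd.dump0 and where data_check is read from are assumptions, and gen_conf is the stand-in from the first sketch:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  data=$(gen_bytes 4096)                  # suite helper: 4096 bytes of test data
  printf %s "$data" > dd.dump0            # assumed redirection
  # write the 4 KiB at output-block offset 1
  $DD --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)
  # read one block back from the same offset into dd.dump1
  $DD --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)
  read -rn4096 data_check < dd.dump1      # assumed input redirection
  [[ $data == "$data_check" ]]            # byte-for-byte match expected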
00:06:46.124 [2024-07-15 16:21:31.585216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62877 ] 00:06:46.124 { 00:06:46.124 "subsystems": [ 00:06:46.124 { 00:06:46.124 "subsystem": "bdev", 00:06:46.124 "config": [ 00:06:46.124 { 00:06:46.124 "params": { 00:06:46.124 "trtype": "pcie", 00:06:46.124 "traddr": "0000:00:10.0", 00:06:46.124 "name": "Nvme0" 00:06:46.124 }, 00:06:46.124 "method": "bdev_nvme_attach_controller" 00:06:46.124 }, 00:06:46.124 { 00:06:46.124 "method": "bdev_wait_for_examine" 00:06:46.124 } 00:06:46.124 ] 00:06:46.124 } 00:06:46.124 ] 00:06:46.124 } 00:06:46.383 [2024-07-15 16:21:31.723073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.383 [2024-07-15 16:21:31.834001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.383 [2024-07-15 16:21:31.890727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.902  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:46.902 00:06:46.902 ************************************ 00:06:46.902 END TEST dd_rw_offset 00:06:46.902 ************************************ 00:06:46.902 16:21:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ nva3pwa6uch3hbt8lyeqncg7hnb271zebvurb06iasmf8rv47f2f8m0aa101roybvwbai5czmpapyfkd70wwf6rbqwr6hhyn00pmqzz9l6uywu59r3ayyqla038njvjgx2gw54a5grfxo7xwywpi13wdcqusdvc09zgb20exiebaur1q2m0mhp5qxmrx569zuattto5y0e7og52m3v37837vajllh4zhab6zol3ucg98fsraxvp33wjrgbg7ii9ghc1c40hgdng7yc08dsb5qye7fw2v8r8wpyuny30rm28e77npl4npwrrlby7au5dqtdhl1tdmrtxv4hmzple7erpqmhviucq2y7g64o2s4pe886501hh64aecplx68ngmpf7qqu0yl9uvuix4lvh7x5viztllr2h1on4duke16kfa2bascwazem5i200k62oof3myiw1gwe5mya1qmubmwdb5tvkrnvam6afteo3rdclbot4iixat9kecb521478lo7wnjh4a8qo2xyvr3xmvtvr82cextzq01351fmrkgjig3k1aq2riccraxeu1brzhvf23g7szalbc4ehk078zojkrm8mnttq2wsxvkrhn24xcb5ecoybhziktg9swraty1gvxbg074dx7dfaznazzue0vmga0k97uagb2524mc72cz68fgc903rr6r9iyaxf3p2vsk63iaj464klp3ked51huje344i0bbgrxaj4pzzf2utsvdek2l38g9h1rdmhy2xr95bbes2m3cddllxou202pynwxee3nuu37z8f8syywf0mnp3fy20rxtloouldqa028zaf19mmytp79a21umi784ylahg3t86rk8e4u5rp7f6qv6n45ute4gkparl2e5nkd45tv9j835wgpjuexe3gtlus761pq3wiwfw4w6okba3xanwv3ocld2unjfv4kvu9hwwygsgt6d2u0ie53kzlo0zx34zt1z4dh78ht8x2288rbx2izzmp5ynhogt9tpdvet6yz5qpfrmhz4t93vfg3fzlq2xj5fsx3ew3qpadn4noyk7g66cc17sv5niyth46g4iysderew7yqle7d0eg2mwsxoi3wejy0kdingzdlhed7ligyyoepxti3gsyqcprdgkolk9lxaojspq0xvk5aqjcw54xhshvaw4hh1lk9dvrhhwir29z51vyefxw99zznknmmbsr6kad8dqprnnyhke50pehzwtuxj8x8c3zdia6oatxklfo78aul2ynqtaa3ms64g0u39x4gi3ogg0qcmfhqboj0bqtcc5htgqfrgzwfmhmprlmtz5jdb4pj5kxpc4sydap6zkh6mmj74c7zmahg6qfz8u6lf166ykdih8el117li8kdbbie8r5od860rdz55k3k7u4p8m27ocektaea55gcguxkhxowvvofennpcy6r87v4lf2k0mh2xpy228l6xfqkl9ljcidrve6lpesk0u29el5kdh7yfyy99ughf8pbnbabpjanzn7znezc4qn27zc4xrjp2zasa4sw4cgd7x8fac2rc5gt0sr7gzgwp3f46ho82j8luai1trzcy96hadubuy881jvygjpbtjor13clrszss8dzbalxhgprua643xbxynpmbm29jmjfslfzhpvgmzdfyrj1cmq7eu7tt33brjvlx4detydd732o7l1xfdxi6nz5xsx5fez107h8mzqnu8gpeqkzbat1ko7bp9969zh4jq24rwipyxc0uhhals63xf87kc60xl82b6jks3l8f053adykkfrlo4sas7yhhrw6tfdtbanthmdew520pquwuncwvtwl2rtdwtqqavra054r8iwyo5n6y58lj3t4hm9efxxav4u0qdeyal4e9x1iskqtemr3nj7a2tljl2y2nqca880t4egf8fxqawdjtdbtdnrip62dv05p76lpeu406qbk0t7wl7a63tu7bk9vx2rf4d95hp7b6mizncefdop8pq
tkrndqwdsx275xjzg865u1mojxfpd73s77wmbwgkqc7iatal4szuzodg68d06a5wgc7ijmbm3tg4jgp2h17xcet74pp5wl6gjjd91kgdg8i8y1azfybonbowfl4s7rbyarg5tgsv2078mbxt6dn2o5ap2ypk8jz97bcp5eya4z201nk1g9gt8qqisac87h1d308z05tgm5yjp7814gbhboawvr370nuj2ozw5ka0dg1kqz04mh7yrxsx82jkh52q9cvw4693bttp44ujejpwkpkk5en6yrpcx58n4wwb9htnqz6my11zexklmthihq1ghpyg9a46ds2pwga6ggywmj9mnx445fmwff9tan4xaberuuroopxrrnufsc6cqabmzt0itum30rvr60ej46ursz41m32wi1aqjddjxw6kfv9fiwpii4p3g6k8xw19yks6w9dg2wh8xqwiv46vdhwa4ev5oramvkp4guh4gr9w5ua4divvr9qahzk185kfb2953zitfp70zkrraxwn35cq2v1h78pv5ldcozmqrxh5ufy8ke4b81d3t218g3vci85bw272gaplnachxchkaeurlnvk6z5rt4s2i9wk2mggliyetgom09ako2d6jn2sw920x6t22l1cflrcetmuk9msouffryv5utp1rn2w7n7a4l8er47cnam0x8jc3w4ezpdpbiftwyy064l0c4azsiht7z3ie4szwge917j5563ku8mfy3mgwf1xbnuvzsb0flauiblx71ps0j6hxty1bkg2xx3unoan3f95w4cxgi367shcrmawaq0r67xpi5ags7j26wqr7m638q2jargrczdi9xebdpnty1or8wphoh8i0rgpuwifpnk3x2w6kgu7tfsf9qz5jcybjen5rhx6m867nr8oumuy35swaj45wckmsz1yit78l34cozyy7x5ifhgces02fejgcvc90pjrhc92ohr3hbh5hphu1cbd4uohvko76jmfnijgcam8fem932ahqn8g81mq16eyv5ylexysswwbpid3rtdpog75hvs69dj4tqc47gf626t7dno7ynhfi4bp35t66segp5989nsnwrzaibn0o444pk3f8v3fzpxo4fhi2uzpsqlacrc5wnczg0gipj7jlu8977ndj99nj9y9p7bvnsx67afixqzelfg1rs902twlf18muevfc11zv342j2i7bz9zqdvn12ktgot4nv56idv4cyxromi05jtqyhs8wtnbxu1tps4pnt3zhlc41489ekx6281rxr8u54ohgrrw58at5ztkul1x86mqy4qhdw3x6vrohyln7o2pxhp8fm6s7dod6uudamx4r32cv3t48hv0ha45a3rc55v65cqtqcwpbfq8boik0nn1xmaumsc0m4c85e2t62npgy6i7mrr5rcihwkskgwepcwtjkkgksti9ldhr18x9rjo3wqlhvrfjttwnd6ep4loy5r836l4wo2y642gfgrn22xwvr4km49c7wo6uss5jv73bbjhd6f6t8d3kz9p2irq1fhdks89uyflmj5o9dt5ufmjzzh4k4rhi647bc5l5zwaw9bh8390t70tnwvb5z12aq4vpqoneec54g0xulsaop8apsa038rc8redrj9szufq6a96aex8ys4j8vzgx3q2ekll2c3xaudnn9swhxsh31vudd28e6q86ux5gjmptxyd43wharkc4479xe3uj25g4wrp3s9md51ptpi0ujqo9mmaw7pfy9uoda9me2q44bd649pwaxb485hnam8moqazvrfdmemrw7wt88pu2j3u8av7lrolqnw6u1rxa9xn5hdha0zzjnudlgzr943d5kvmye6pnx6mt9qpnhunstr62onxbmbaomngcv1jciu10lalbmqigqb5ui5agmr8webdwfdj1q0zsifj14nbaa0qsp42evq76l59aqhy9blnopkawtsdpmiv4h71m24ourmwilm4flf5pf8ifwgk5zxctckj0uebr9pqz2dzigrgiexo2juzj14ajcfpcci5a3agncnr79 == 
\n\v\a\3\p\w\a\6\u\c\h\3\h\b\t\8\l\y\e\q\n\c\g\7\h\n\b\2\7\1\z\e\b\v\u\r\b\0\6\i\a\s\m\f\8\r\v\4\7\f\2\f\8\m\0\a\a\1\0\1\r\o\y\b\v\w\b\a\i\5\c\z\m\p\a\p\y\f\k\d\7\0\w\w\f\6\r\b\q\w\r\6\h\h\y\n\0\0\p\m\q\z\z\9\l\6\u\y\w\u\5\9\r\3\a\y\y\q\l\a\0\3\8\n\j\v\j\g\x\2\g\w\5\4\a\5\g\r\f\x\o\7\x\w\y\w\p\i\1\3\w\d\c\q\u\s\d\v\c\0\9\z\g\b\2\0\e\x\i\e\b\a\u\r\1\q\2\m\0\m\h\p\5\q\x\m\r\x\5\6\9\z\u\a\t\t\t\o\5\y\0\e\7\o\g\5\2\m\3\v\3\7\8\3\7\v\a\j\l\l\h\4\z\h\a\b\6\z\o\l\3\u\c\g\9\8\f\s\r\a\x\v\p\3\3\w\j\r\g\b\g\7\i\i\9\g\h\c\1\c\4\0\h\g\d\n\g\7\y\c\0\8\d\s\b\5\q\y\e\7\f\w\2\v\8\r\8\w\p\y\u\n\y\3\0\r\m\2\8\e\7\7\n\p\l\4\n\p\w\r\r\l\b\y\7\a\u\5\d\q\t\d\h\l\1\t\d\m\r\t\x\v\4\h\m\z\p\l\e\7\e\r\p\q\m\h\v\i\u\c\q\2\y\7\g\6\4\o\2\s\4\p\e\8\8\6\5\0\1\h\h\6\4\a\e\c\p\l\x\6\8\n\g\m\p\f\7\q\q\u\0\y\l\9\u\v\u\i\x\4\l\v\h\7\x\5\v\i\z\t\l\l\r\2\h\1\o\n\4\d\u\k\e\1\6\k\f\a\2\b\a\s\c\w\a\z\e\m\5\i\2\0\0\k\6\2\o\o\f\3\m\y\i\w\1\g\w\e\5\m\y\a\1\q\m\u\b\m\w\d\b\5\t\v\k\r\n\v\a\m\6\a\f\t\e\o\3\r\d\c\l\b\o\t\4\i\i\x\a\t\9\k\e\c\b\5\2\1\4\7\8\l\o\7\w\n\j\h\4\a\8\q\o\2\x\y\v\r\3\x\m\v\t\v\r\8\2\c\e\x\t\z\q\0\1\3\5\1\f\m\r\k\g\j\i\g\3\k\1\a\q\2\r\i\c\c\r\a\x\e\u\1\b\r\z\h\v\f\2\3\g\7\s\z\a\l\b\c\4\e\h\k\0\7\8\z\o\j\k\r\m\8\m\n\t\t\q\2\w\s\x\v\k\r\h\n\2\4\x\c\b\5\e\c\o\y\b\h\z\i\k\t\g\9\s\w\r\a\t\y\1\g\v\x\b\g\0\7\4\d\x\7\d\f\a\z\n\a\z\z\u\e\0\v\m\g\a\0\k\9\7\u\a\g\b\2\5\2\4\m\c\7\2\c\z\6\8\f\g\c\9\0\3\r\r\6\r\9\i\y\a\x\f\3\p\2\v\s\k\6\3\i\a\j\4\6\4\k\l\p\3\k\e\d\5\1\h\u\j\e\3\4\4\i\0\b\b\g\r\x\a\j\4\p\z\z\f\2\u\t\s\v\d\e\k\2\l\3\8\g\9\h\1\r\d\m\h\y\2\x\r\9\5\b\b\e\s\2\m\3\c\d\d\l\l\x\o\u\2\0\2\p\y\n\w\x\e\e\3\n\u\u\3\7\z\8\f\8\s\y\y\w\f\0\m\n\p\3\f\y\2\0\r\x\t\l\o\o\u\l\d\q\a\0\2\8\z\a\f\1\9\m\m\y\t\p\7\9\a\2\1\u\m\i\7\8\4\y\l\a\h\g\3\t\8\6\r\k\8\e\4\u\5\r\p\7\f\6\q\v\6\n\4\5\u\t\e\4\g\k\p\a\r\l\2\e\5\n\k\d\4\5\t\v\9\j\8\3\5\w\g\p\j\u\e\x\e\3\g\t\l\u\s\7\6\1\p\q\3\w\i\w\f\w\4\w\6\o\k\b\a\3\x\a\n\w\v\3\o\c\l\d\2\u\n\j\f\v\4\k\v\u\9\h\w\w\y\g\s\g\t\6\d\2\u\0\i\e\5\3\k\z\l\o\0\z\x\3\4\z\t\1\z\4\d\h\7\8\h\t\8\x\2\2\8\8\r\b\x\2\i\z\z\m\p\5\y\n\h\o\g\t\9\t\p\d\v\e\t\6\y\z\5\q\p\f\r\m\h\z\4\t\9\3\v\f\g\3\f\z\l\q\2\x\j\5\f\s\x\3\e\w\3\q\p\a\d\n\4\n\o\y\k\7\g\6\6\c\c\1\7\s\v\5\n\i\y\t\h\4\6\g\4\i\y\s\d\e\r\e\w\7\y\q\l\e\7\d\0\e\g\2\m\w\s\x\o\i\3\w\e\j\y\0\k\d\i\n\g\z\d\l\h\e\d\7\l\i\g\y\y\o\e\p\x\t\i\3\g\s\y\q\c\p\r\d\g\k\o\l\k\9\l\x\a\o\j\s\p\q\0\x\v\k\5\a\q\j\c\w\5\4\x\h\s\h\v\a\w\4\h\h\1\l\k\9\d\v\r\h\h\w\i\r\2\9\z\5\1\v\y\e\f\x\w\9\9\z\z\n\k\n\m\m\b\s\r\6\k\a\d\8\d\q\p\r\n\n\y\h\k\e\5\0\p\e\h\z\w\t\u\x\j\8\x\8\c\3\z\d\i\a\6\o\a\t\x\k\l\f\o\7\8\a\u\l\2\y\n\q\t\a\a\3\m\s\6\4\g\0\u\3\9\x\4\g\i\3\o\g\g\0\q\c\m\f\h\q\b\o\j\0\b\q\t\c\c\5\h\t\g\q\f\r\g\z\w\f\m\h\m\p\r\l\m\t\z\5\j\d\b\4\p\j\5\k\x\p\c\4\s\y\d\a\p\6\z\k\h\6\m\m\j\7\4\c\7\z\m\a\h\g\6\q\f\z\8\u\6\l\f\1\6\6\y\k\d\i\h\8\e\l\1\1\7\l\i\8\k\d\b\b\i\e\8\r\5\o\d\8\6\0\r\d\z\5\5\k\3\k\7\u\4\p\8\m\2\7\o\c\e\k\t\a\e\a\5\5\g\c\g\u\x\k\h\x\o\w\v\v\o\f\e\n\n\p\c\y\6\r\8\7\v\4\l\f\2\k\0\m\h\2\x\p\y\2\2\8\l\6\x\f\q\k\l\9\l\j\c\i\d\r\v\e\6\l\p\e\s\k\0\u\2\9\e\l\5\k\d\h\7\y\f\y\y\9\9\u\g\h\f\8\p\b\n\b\a\b\p\j\a\n\z\n\7\z\n\e\z\c\4\q\n\2\7\z\c\4\x\r\j\p\2\z\a\s\a\4\s\w\4\c\g\d\7\x\8\f\a\c\2\r\c\5\g\t\0\s\r\7\g\z\g\w\p\3\f\4\6\h\o\8\2\j\8\l\u\a\i\1\t\r\z\c\y\9\6\h\a\d\u\b\u\y\8\8\1\j\v\y\g\j\p\b\t\j\o\r\1\3\c\l\r\s\z\s\s\8\d\z\b\a\l\x\h\g\p\r\u\a\6\4\3\x\b\x\y\n\p\m\b\m\2\9\j\m\j\f\s\l\f\z\h\p\v\g\m\z\d\f\y\r\j\1\c\m\q\7\e\u\7\t\t\3\3\b\r\j\v\l\x\4\d\e\t\y\d\d\7\3\2\o\7\l\1\x\f\d\x\i\6\n\z\5\x\s\x\5\f\e\z\1\0\7\h\8\m\z\q\n\u\8\g\p\e\q\k\z\b\a\t\1\k\o\7\b\p\9\9\6\9\z\h\4\j\q\2\4\r\w\i\p\y\x\c\0\u\
h\h\a\l\s\6\3\x\f\8\7\k\c\6\0\x\l\8\2\b\6\j\k\s\3\l\8\f\0\5\3\a\d\y\k\k\f\r\l\o\4\s\a\s\7\y\h\h\r\w\6\t\f\d\t\b\a\n\t\h\m\d\e\w\5\2\0\p\q\u\w\u\n\c\w\v\t\w\l\2\r\t\d\w\t\q\q\a\v\r\a\0\5\4\r\8\i\w\y\o\5\n\6\y\5\8\l\j\3\t\4\h\m\9\e\f\x\x\a\v\4\u\0\q\d\e\y\a\l\4\e\9\x\1\i\s\k\q\t\e\m\r\3\n\j\7\a\2\t\l\j\l\2\y\2\n\q\c\a\8\8\0\t\4\e\g\f\8\f\x\q\a\w\d\j\t\d\b\t\d\n\r\i\p\6\2\d\v\0\5\p\7\6\l\p\e\u\4\0\6\q\b\k\0\t\7\w\l\7\a\6\3\t\u\7\b\k\9\v\x\2\r\f\4\d\9\5\h\p\7\b\6\m\i\z\n\c\e\f\d\o\p\8\p\q\t\k\r\n\d\q\w\d\s\x\2\7\5\x\j\z\g\8\6\5\u\1\m\o\j\x\f\p\d\7\3\s\7\7\w\m\b\w\g\k\q\c\7\i\a\t\a\l\4\s\z\u\z\o\d\g\6\8\d\0\6\a\5\w\g\c\7\i\j\m\b\m\3\t\g\4\j\g\p\2\h\1\7\x\c\e\t\7\4\p\p\5\w\l\6\g\j\j\d\9\1\k\g\d\g\8\i\8\y\1\a\z\f\y\b\o\n\b\o\w\f\l\4\s\7\r\b\y\a\r\g\5\t\g\s\v\2\0\7\8\m\b\x\t\6\d\n\2\o\5\a\p\2\y\p\k\8\j\z\9\7\b\c\p\5\e\y\a\4\z\2\0\1\n\k\1\g\9\g\t\8\q\q\i\s\a\c\8\7\h\1\d\3\0\8\z\0\5\t\g\m\5\y\j\p\7\8\1\4\g\b\h\b\o\a\w\v\r\3\7\0\n\u\j\2\o\z\w\5\k\a\0\d\g\1\k\q\z\0\4\m\h\7\y\r\x\s\x\8\2\j\k\h\5\2\q\9\c\v\w\4\6\9\3\b\t\t\p\4\4\u\j\e\j\p\w\k\p\k\k\5\e\n\6\y\r\p\c\x\5\8\n\4\w\w\b\9\h\t\n\q\z\6\m\y\1\1\z\e\x\k\l\m\t\h\i\h\q\1\g\h\p\y\g\9\a\4\6\d\s\2\p\w\g\a\6\g\g\y\w\m\j\9\m\n\x\4\4\5\f\m\w\f\f\9\t\a\n\4\x\a\b\e\r\u\u\r\o\o\p\x\r\r\n\u\f\s\c\6\c\q\a\b\m\z\t\0\i\t\u\m\3\0\r\v\r\6\0\e\j\4\6\u\r\s\z\4\1\m\3\2\w\i\1\a\q\j\d\d\j\x\w\6\k\f\v\9\f\i\w\p\i\i\4\p\3\g\6\k\8\x\w\1\9\y\k\s\6\w\9\d\g\2\w\h\8\x\q\w\i\v\4\6\v\d\h\w\a\4\e\v\5\o\r\a\m\v\k\p\4\g\u\h\4\g\r\9\w\5\u\a\4\d\i\v\v\r\9\q\a\h\z\k\1\8\5\k\f\b\2\9\5\3\z\i\t\f\p\7\0\z\k\r\r\a\x\w\n\3\5\c\q\2\v\1\h\7\8\p\v\5\l\d\c\o\z\m\q\r\x\h\5\u\f\y\8\k\e\4\b\8\1\d\3\t\2\1\8\g\3\v\c\i\8\5\b\w\2\7\2\g\a\p\l\n\a\c\h\x\c\h\k\a\e\u\r\l\n\v\k\6\z\5\r\t\4\s\2\i\9\w\k\2\m\g\g\l\i\y\e\t\g\o\m\0\9\a\k\o\2\d\6\j\n\2\s\w\9\2\0\x\6\t\2\2\l\1\c\f\l\r\c\e\t\m\u\k\9\m\s\o\u\f\f\r\y\v\5\u\t\p\1\r\n\2\w\7\n\7\a\4\l\8\e\r\4\7\c\n\a\m\0\x\8\j\c\3\w\4\e\z\p\d\p\b\i\f\t\w\y\y\0\6\4\l\0\c\4\a\z\s\i\h\t\7\z\3\i\e\4\s\z\w\g\e\9\1\7\j\5\5\6\3\k\u\8\m\f\y\3\m\g\w\f\1\x\b\n\u\v\z\s\b\0\f\l\a\u\i\b\l\x\7\1\p\s\0\j\6\h\x\t\y\1\b\k\g\2\x\x\3\u\n\o\a\n\3\f\9\5\w\4\c\x\g\i\3\6\7\s\h\c\r\m\a\w\a\q\0\r\6\7\x\p\i\5\a\g\s\7\j\2\6\w\q\r\7\m\6\3\8\q\2\j\a\r\g\r\c\z\d\i\9\x\e\b\d\p\n\t\y\1\o\r\8\w\p\h\o\h\8\i\0\r\g\p\u\w\i\f\p\n\k\3\x\2\w\6\k\g\u\7\t\f\s\f\9\q\z\5\j\c\y\b\j\e\n\5\r\h\x\6\m\8\6\7\n\r\8\o\u\m\u\y\3\5\s\w\a\j\4\5\w\c\k\m\s\z\1\y\i\t\7\8\l\3\4\c\o\z\y\y\7\x\5\i\f\h\g\c\e\s\0\2\f\e\j\g\c\v\c\9\0\p\j\r\h\c\9\2\o\h\r\3\h\b\h\5\h\p\h\u\1\c\b\d\4\u\o\h\v\k\o\7\6\j\m\f\n\i\j\g\c\a\m\8\f\e\m\9\3\2\a\h\q\n\8\g\8\1\m\q\1\6\e\y\v\5\y\l\e\x\y\s\s\w\w\b\p\i\d\3\r\t\d\p\o\g\7\5\h\v\s\6\9\d\j\4\t\q\c\4\7\g\f\6\2\6\t\7\d\n\o\7\y\n\h\f\i\4\b\p\3\5\t\6\6\s\e\g\p\5\9\8\9\n\s\n\w\r\z\a\i\b\n\0\o\4\4\4\p\k\3\f\8\v\3\f\z\p\x\o\4\f\h\i\2\u\z\p\s\q\l\a\c\r\c\5\w\n\c\z\g\0\g\i\p\j\7\j\l\u\8\9\7\7\n\d\j\9\9\n\j\9\y\9\p\7\b\v\n\s\x\6\7\a\f\i\x\q\z\e\l\f\g\1\r\s\9\0\2\t\w\l\f\1\8\m\u\e\v\f\c\1\1\z\v\3\4\2\j\2\i\7\b\z\9\z\q\d\v\n\1\2\k\t\g\o\t\4\n\v\5\6\i\d\v\4\c\y\x\r\o\m\i\0\5\j\t\q\y\h\s\8\w\t\n\b\x\u\1\t\p\s\4\p\n\t\3\z\h\l\c\4\1\4\8\9\e\k\x\6\2\8\1\r\x\r\8\u\5\4\o\h\g\r\r\w\5\8\a\t\5\z\t\k\u\l\1\x\8\6\m\q\y\4\q\h\d\w\3\x\6\v\r\o\h\y\l\n\7\o\2\p\x\h\p\8\f\m\6\s\7\d\o\d\6\u\u\d\a\m\x\4\r\3\2\c\v\3\t\4\8\h\v\0\h\a\4\5\a\3\r\c\5\5\v\6\5\c\q\t\q\c\w\p\b\f\q\8\b\o\i\k\0\n\n\1\x\m\a\u\m\s\c\0\m\4\c\8\5\e\2\t\6\2\n\p\g\y\6\i\7\m\r\r\5\r\c\i\h\w\k\s\k\g\w\e\p\c\w\t\j\k\k\g\k\s\t\i\9\l\d\h\r\1\8\x\9\r\j\o\3\w\q\l\h\v\r\f\j\t\t\w\n\d\6\e\p\4\l\o\y\5\r\8\3\6\l\4\w\o\2\y\6\4\2\g\f\g\r\n\2\2\x\w\v\r\4\k\m\4\9\c\7\w\o\6\u\s\s\5\j
\v\7\3\b\b\j\h\d\6\f\6\t\8\d\3\k\z\9\p\2\i\r\q\1\f\h\d\k\s\8\9\u\y\f\l\m\j\5\o\9\d\t\5\u\f\m\j\z\z\h\4\k\4\r\h\i\6\4\7\b\c\5\l\5\z\w\a\w\9\b\h\8\3\9\0\t\7\0\t\n\w\v\b\5\z\1\2\a\q\4\v\p\q\o\n\e\e\c\5\4\g\0\x\u\l\s\a\o\p\8\a\p\s\a\0\3\8\r\c\8\r\e\d\r\j\9\s\z\u\f\q\6\a\9\6\a\e\x\8\y\s\4\j\8\v\z\g\x\3\q\2\e\k\l\l\2\c\3\x\a\u\d\n\n\9\s\w\h\x\s\h\3\1\v\u\d\d\2\8\e\6\q\8\6\u\x\5\g\j\m\p\t\x\y\d\4\3\w\h\a\r\k\c\4\4\7\9\x\e\3\u\j\2\5\g\4\w\r\p\3\s\9\m\d\5\1\p\t\p\i\0\u\j\q\o\9\m\m\a\w\7\p\f\y\9\u\o\d\a\9\m\e\2\q\4\4\b\d\6\4\9\p\w\a\x\b\4\8\5\h\n\a\m\8\m\o\q\a\z\v\r\f\d\m\e\m\r\w\7\w\t\8\8\p\u\2\j\3\u\8\a\v\7\l\r\o\l\q\n\w\6\u\1\r\x\a\9\x\n\5\h\d\h\a\0\z\z\j\n\u\d\l\g\z\r\9\4\3\d\5\k\v\m\y\e\6\p\n\x\6\m\t\9\q\p\n\h\u\n\s\t\r\6\2\o\n\x\b\m\b\a\o\m\n\g\c\v\1\j\c\i\u\1\0\l\a\l\b\m\q\i\g\q\b\5\u\i\5\a\g\m\r\8\w\e\b\d\w\f\d\j\1\q\0\z\s\i\f\j\1\4\n\b\a\a\0\q\s\p\4\2\e\v\q\7\6\l\5\9\a\q\h\y\9\b\l\n\o\p\k\a\w\t\s\d\p\m\i\v\4\h\7\1\m\2\4\o\u\r\m\w\i\l\m\4\f\l\f\5\p\f\8\i\f\w\g\k\5\z\x\c\t\c\k\j\0\u\e\b\r\9\p\q\z\2\d\z\i\g\r\g\i\e\x\o\2\j\u\z\j\1\4\a\j\c\f\p\c\c\i\5\a\3\a\g\n\c\n\r\7\9 ]] 00:06:46.903 00:06:46.903 real 0m1.418s 00:06:46.903 user 0m0.977s 00:06:46.903 sys 0m0.629s 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.903 16:21:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.903 [2024-07-15 16:21:32.302602] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:46.903 [2024-07-15 16:21:32.302689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62907 ] 00:06:46.903 { 00:06:46.903 "subsystems": [ 00:06:46.903 { 00:06:46.903 "subsystem": "bdev", 00:06:46.903 "config": [ 00:06:46.903 { 00:06:46.903 "params": { 00:06:46.903 "trtype": "pcie", 00:06:46.903 "traddr": "0000:00:10.0", 00:06:46.903 "name": "Nvme0" 00:06:46.903 }, 00:06:46.903 "method": "bdev_nvme_attach_controller" 00:06:46.903 }, 00:06:46.903 { 00:06:46.903 "method": "bdev_wait_for_examine" 00:06:46.903 } 00:06:46.903 ] 00:06:46.903 } 00:06:46.903 ] 00:06:46.903 } 00:06:46.903 [2024-07-15 16:21:32.432594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.162 [2024-07-15 16:21:32.525004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.162 [2024-07-15 16:21:32.581741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.421  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:47.421 00:06:47.421 16:21:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.421 ************************************ 00:06:47.421 END TEST spdk_dd_basic_rw 00:06:47.421 ************************************ 00:06:47.421 00:06:47.421 real 0m18.905s 00:06:47.421 user 0m13.805s 00:06:47.421 sys 0m6.737s 00:06:47.421 16:21:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.421 16:21:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.421 16:21:32 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:47.421 16:21:32 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:47.421 16:21:32 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.421 16:21:32 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.421 16:21:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:47.421 ************************************ 00:06:47.421 START TEST spdk_dd_posix 00:06:47.421 ************************************ 00:06:47.421 16:21:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:47.680 * Looking for test storage... 
00:06:47.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:47.680 16:21:33 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.680 16:21:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.680 16:21:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.680 16:21:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.680 16:21:33 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.680 16:21:33 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:47.681 * First test run, liburing in use 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:47.681 ************************************ 00:06:47.681 START TEST dd_flag_append 00:06:47.681 ************************************ 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=cq6efd0f5px1tg5rq4h05pne4fcnkzib 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=0c6y4wj0963zgtofcf6k7zvdhpoijt7n 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s cq6efd0f5px1tg5rq4h05pne4fcnkzib 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 0c6y4wj0963zgtofcf6k7zvdhpoijt7n 00:06:47.681 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:47.681 [2024-07-15 16:21:33.113446] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
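The dd_flag_append case set up above checks --oflag=append: two 32-character tokens from gen_bytes go into dd.dump0 and dd.dump1, dd.dump0 is then copied onto dd.dump1 with the append flag, and the test passes only if dd.dump1 ends up holding its original token followed by dump0's (the [[ ... ]] comparison that follows). A condensed sketch, assuming the printf output is redirected into the dump files (xtrace hides the redirections):

  dump0=$(gen_bytes 32)            # cq6efd0f5px1tg5rq4h05pne4fcnkzib in this run
  dump1=$(gen_bytes 32)            # 0c6y4wj0963zgtofcf6k7zvdhpoijt7n
  printf %s "$dump0" > dd.dump0    # assumed redirections
  printf %s "$dump1" > dd.dump1
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=dd.dump0 --of=dd.dump1 --oflag=append
  # dd.dump1 must now hold its own 32 bytes followed by dump0's 32 bytes
  [[ $(< dd.dump1) == "$dump1$dump0" ]]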
00:06:47.681 [2024-07-15 16:21:33.113529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62971 ] 00:06:47.940 [2024-07-15 16:21:33.247110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.940 [2024-07-15 16:21:33.354231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.940 [2024-07-15 16:21:33.410820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.199  Copying: 32/32 [B] (average 31 kBps) 00:06:48.199 00:06:48.199 ************************************ 00:06:48.199 END TEST dd_flag_append 00:06:48.199 ************************************ 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 0c6y4wj0963zgtofcf6k7zvdhpoijt7ncq6efd0f5px1tg5rq4h05pne4fcnkzib == \0\c\6\y\4\w\j\0\9\6\3\z\g\t\o\f\c\f\6\k\7\z\v\d\h\p\o\i\j\t\7\n\c\q\6\e\f\d\0\f\5\p\x\1\t\g\5\r\q\4\h\0\5\p\n\e\4\f\c\n\k\z\i\b ]] 00:06:48.199 00:06:48.199 real 0m0.587s 00:06:48.199 user 0m0.331s 00:06:48.199 sys 0m0.284s 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:48.199 ************************************ 00:06:48.199 START TEST dd_flag_directory 00:06:48.199 ************************************ 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.199 16:21:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:48.458 [2024-07-15 16:21:33.750775] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:48.458 [2024-07-15 16:21:33.750888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62999 ] 00:06:48.458 [2024-07-15 16:21:33.884888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.458 [2024-07-15 16:21:33.995372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.723 [2024-07-15 16:21:34.053443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.723 [2024-07-15 16:21:34.088047] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:48.723 [2024-07-15 16:21:34.088101] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:48.723 [2024-07-15 16:21:34.088129] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.723 [2024-07-15 16:21:34.200878] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:49.001 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:49.001 [2024-07-15 16:21:34.345651] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:49.002 [2024-07-15 16:21:34.345761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63009 ] 00:06:49.002 [2024-07-15 16:21:34.483787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.273 [2024-07-15 16:21:34.575054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.273 [2024-07-15 16:21:34.630359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.273 [2024-07-15 16:21:34.664029] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:49.273 [2024-07-15 16:21:34.664083] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:49.273 [2024-07-15 16:21:34.664113] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.273 [2024-07-15 16:21:34.777702] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:49.532 ************************************ 00:06:49.532 END TEST dd_flag_directory 00:06:49.532 ************************************ 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.532 00:06:49.532 real 0m1.177s 00:06:49.532 user 0m0.666s 00:06:49.532 sys 0m0.301s 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:49.532 ************************************ 00:06:49.532 START TEST dd_flag_nofollow 00:06:49.532 ************************************ 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:49.532 16:21:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.532 
[2024-07-15 16:21:34.999929] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:49.532 [2024-07-15 16:21:35.000698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63037 ] 00:06:49.791 [2024-07-15 16:21:35.135753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.791 [2024-07-15 16:21:35.235852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.792 [2024-07-15 16:21:35.290142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.792 [2024-07-15 16:21:35.321445] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:49.792 [2024-07-15 16:21:35.321496] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:49.792 [2024-07-15 16:21:35.321527] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.051 [2024-07-15 16:21:35.436939] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.051 16:21:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:50.051 [2024-07-15 16:21:35.579189] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:50.051 [2024-07-15 16:21:35.579295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63052 ] 00:06:50.311 [2024-07-15 16:21:35.713167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.311 [2024-07-15 16:21:35.808645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.571 [2024-07-15 16:21:35.863647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.571 [2024-07-15 16:21:35.899896] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:50.571 [2024-07-15 16:21:35.899978] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:50.571 [2024-07-15 16:21:35.900011] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.571 [2024-07-15 16:21:36.019950] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:50.571 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:50.571 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.571 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:50.571 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:50.571 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:50.571 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.571 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:50.571 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:50.571 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:50.829 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.829 [2024-07-15 16:21:36.178244] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
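The nofollow runs traced above and below follow the same negative-test pattern as the directory checks: dd.dump0 and dd.dump1 are symlinked with ln -fs, and spdk_dd is expected to fail with ELOOP ("Too many levels of symbolic links") whenever --iflag=nofollow or --oflag=nofollow is pointed at a link; the harness then normalizes the exit status (es=216 -> 88 -> 1). A minimal shell outline of that pattern, with the binary path and payload as assumptions rather than values taken from this run:

  SPDK_DD=./build/bin/spdk_dd              # assumed location of the built spdk_dd
  printf 'A%.0s' {1..512} > dd.dump0       # any 512-byte payload
  ln -fs dd.dump0 dd.dump0.link
  # Reading through the link with nofollow set must fail; success is the bug.
  if "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
      echo "unexpected success: nofollow read through a symlink" >&2
  fi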
00:06:50.829 [2024-07-15 16:21:36.178736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63060 ] 00:06:50.829 [2024-07-15 16:21:36.317769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.089 [2024-07-15 16:21:36.408999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.089 [2024-07-15 16:21:36.466246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.348  Copying: 512/512 [B] (average 500 kBps) 00:06:51.348 00:06:51.349 ************************************ 00:06:51.349 END TEST dd_flag_nofollow 00:06:51.349 ************************************ 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ os0x19gtqb7v6h7rwau94rsfy0haapn5rymh65tpe18vx9g4jh4grq7c1b46g1zv31uhthtwl7lbkx3z2dg87jfgw70y2g2mxyotv04u6al8bahp0i3hcmddsqpa228elsza4mbvzxxfvqm45f81djg87q7u333y3tj3unewjwmascmk8yngjrgp8xyh0bywaw9cfofss93at9sfuex7p0tu2s1ee7o8r87roeqmn13uobod7m7v8bkyxsa0hhqavov9j54s4zsu6th8m3jymu0q6zwkr7g87dj6tex81e40m2rpfzp52fc3fuk3w3x347h7hkvp0lmtrtghbuawnz0dkviyag8hrkt89ohfrqf9cj64cxpl6gur53nza55dlbvk5xiqqsn1x83bp1cmfwud5gv7vy446qkmtwrvci7jzjco91mvd1ul5fw6jf3qykkdhr3ugjc8d61rpva2nyupqxbiy9hv22yxkgcc58f6l9gx7ajjoxlnous31wpd == \o\s\0\x\1\9\g\t\q\b\7\v\6\h\7\r\w\a\u\9\4\r\s\f\y\0\h\a\a\p\n\5\r\y\m\h\6\5\t\p\e\1\8\v\x\9\g\4\j\h\4\g\r\q\7\c\1\b\4\6\g\1\z\v\3\1\u\h\t\h\t\w\l\7\l\b\k\x\3\z\2\d\g\8\7\j\f\g\w\7\0\y\2\g\2\m\x\y\o\t\v\0\4\u\6\a\l\8\b\a\h\p\0\i\3\h\c\m\d\d\s\q\p\a\2\2\8\e\l\s\z\a\4\m\b\v\z\x\x\f\v\q\m\4\5\f\8\1\d\j\g\8\7\q\7\u\3\3\3\y\3\t\j\3\u\n\e\w\j\w\m\a\s\c\m\k\8\y\n\g\j\r\g\p\8\x\y\h\0\b\y\w\a\w\9\c\f\o\f\s\s\9\3\a\t\9\s\f\u\e\x\7\p\0\t\u\2\s\1\e\e\7\o\8\r\8\7\r\o\e\q\m\n\1\3\u\o\b\o\d\7\m\7\v\8\b\k\y\x\s\a\0\h\h\q\a\v\o\v\9\j\5\4\s\4\z\s\u\6\t\h\8\m\3\j\y\m\u\0\q\6\z\w\k\r\7\g\8\7\d\j\6\t\e\x\8\1\e\4\0\m\2\r\p\f\z\p\5\2\f\c\3\f\u\k\3\w\3\x\3\4\7\h\7\h\k\v\p\0\l\m\t\r\t\g\h\b\u\a\w\n\z\0\d\k\v\i\y\a\g\8\h\r\k\t\8\9\o\h\f\r\q\f\9\c\j\6\4\c\x\p\l\6\g\u\r\5\3\n\z\a\5\5\d\l\b\v\k\5\x\i\q\q\s\n\1\x\8\3\b\p\1\c\m\f\w\u\d\5\g\v\7\v\y\4\4\6\q\k\m\t\w\r\v\c\i\7\j\z\j\c\o\9\1\m\v\d\1\u\l\5\f\w\6\j\f\3\q\y\k\k\d\h\r\3\u\g\j\c\8\d\6\1\r\p\v\a\2\n\y\u\p\q\x\b\i\y\9\h\v\2\2\y\x\k\g\c\c\5\8\f\6\l\9\g\x\7\a\j\j\o\x\l\n\o\u\s\3\1\w\p\d ]] 00:06:51.349 00:06:51.349 real 0m1.778s 00:06:51.349 user 0m1.005s 00:06:51.349 sys 0m0.578s 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.349 ************************************ 00:06:51.349 START TEST dd_flag_noatime 00:06:51.349 ************************************ 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:51.349 16:21:36 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721060496 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721060496 00:06:51.349 16:21:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:52.285 16:21:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.543 [2024-07-15 16:21:37.847086] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:52.543 [2024-07-15 16:21:37.847180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63102 ] 00:06:52.543 [2024-07-15 16:21:37.988260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.801 [2024-07-15 16:21:38.121619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.802 [2024-07-15 16:21:38.178855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.060  Copying: 512/512 [B] (average 500 kBps) 00:06:53.060 00:06:53.060 16:21:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:53.060 16:21:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721060496 )) 00:06:53.060 16:21:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.060 16:21:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721060496 )) 00:06:53.060 16:21:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.060 [2024-07-15 16:21:38.500992] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
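The noatime sequence above records each file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and asserts that the source's atime did not move; the later check in this test, (( atime_if < ... )), re-reads the atime after a plain copy to confirm it does advance without the flag. In outline, assuming GNU stat and a built spdk_dd at an assumed path:

  SPDK_DD=./build/bin/spdk_dd                     # assumed path
  atime_before=$(stat --printf=%X dd.dump0)       # epoch seconds of last access
  sleep 1
  "$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
  atime_after=$(stat --printf=%X dd.dump0)
  [ "$atime_before" -eq "$atime_after" ] || echo "atime moved despite noatime" >&2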
00:06:53.060 [2024-07-15 16:21:38.501100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63116 ] 00:06:53.318 [2024-07-15 16:21:38.639053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.318 [2024-07-15 16:21:38.752950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.318 [2024-07-15 16:21:38.809400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.575  Copying: 512/512 [B] (average 500 kBps) 00:06:53.575 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:53.575 ************************************ 00:06:53.575 END TEST dd_flag_noatime 00:06:53.575 ************************************ 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721060498 )) 00:06:53.575 00:06:53.575 real 0m2.290s 00:06:53.575 user 0m0.743s 00:06:53.575 sys 0m0.606s 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:53.575 ************************************ 00:06:53.575 START TEST dd_flags_misc 00:06:53.575 ************************************ 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:53.575 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:53.833 [2024-07-15 16:21:39.175717] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
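dd_flags_misc, which starts here, crosses the read flags (direct, nonblock) with the write flags (direct, nonblock, sync, dsync) and verifies each of the eight copies byte-for-byte; the long [[ ... == ... ]] lines that follow are those content checks. A compact sketch of the same loop, with the binary path assumed and cmp standing in for the harness's string comparison:

  SPDK_DD=./build/bin/spdk_dd                      # assumed path
  for flag_ro in direct nonblock; do
      head -c 512 /dev/urandom > dd.dump0          # payload regenerated per read flag, as in the trace
      for flag_rw in direct nonblock sync dsync; do
          "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
          cmp -s dd.dump0 dd.dump1 || echo "mismatch: $flag_ro/$flag_rw" >&2
      done
  done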
00:06:53.833 [2024-07-15 16:21:39.176124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63150 ] 00:06:53.833 [2024-07-15 16:21:39.317587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.091 [2024-07-15 16:21:39.424792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.091 [2024-07-15 16:21:39.481775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.349  Copying: 512/512 [B] (average 500 kBps) 00:06:54.349 00:06:54.349 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ au5uqzea93ql788af6vjylbop1tscj824glgxh9y8g50k9n6j3n7hhen9sxymgpzupcecrwgc86dd3xsdk3bn4rvtuzsn30w4up8i06epcz46db45cebytjdc3xe4miwpjvcg66zvw6i0n37mdmn0qc5i1rb9nnipvz5nsz3y214h5f6mrg57bl0ohclvp3nmw1qx5ia0oju1hfxy7pwm4mfhime5yubqos9e0meenvr2dtcx0o0t6q4nfouk5f4aomg3yhukugenm1bxaax0itkgd2iasnuxuxmasel89hgei99m0qte7czapzjyzo6z5ke394q8caoexa9pwkjpmuflyrdia710kouumjm11psqdndvymmgtp5fz0t9f2th1duf8sgkaacundg7uv2we7swu8ummxubn1q2xyjru14v9z6i5xhqefxmymfh7yzclzuor3c5robajxqjo0fky4wgrfyhe3019wlkvo57ejwtusum4d3odwttgjtdrm2 == \a\u\5\u\q\z\e\a\9\3\q\l\7\8\8\a\f\6\v\j\y\l\b\o\p\1\t\s\c\j\8\2\4\g\l\g\x\h\9\y\8\g\5\0\k\9\n\6\j\3\n\7\h\h\e\n\9\s\x\y\m\g\p\z\u\p\c\e\c\r\w\g\c\8\6\d\d\3\x\s\d\k\3\b\n\4\r\v\t\u\z\s\n\3\0\w\4\u\p\8\i\0\6\e\p\c\z\4\6\d\b\4\5\c\e\b\y\t\j\d\c\3\x\e\4\m\i\w\p\j\v\c\g\6\6\z\v\w\6\i\0\n\3\7\m\d\m\n\0\q\c\5\i\1\r\b\9\n\n\i\p\v\z\5\n\s\z\3\y\2\1\4\h\5\f\6\m\r\g\5\7\b\l\0\o\h\c\l\v\p\3\n\m\w\1\q\x\5\i\a\0\o\j\u\1\h\f\x\y\7\p\w\m\4\m\f\h\i\m\e\5\y\u\b\q\o\s\9\e\0\m\e\e\n\v\r\2\d\t\c\x\0\o\0\t\6\q\4\n\f\o\u\k\5\f\4\a\o\m\g\3\y\h\u\k\u\g\e\n\m\1\b\x\a\a\x\0\i\t\k\g\d\2\i\a\s\n\u\x\u\x\m\a\s\e\l\8\9\h\g\e\i\9\9\m\0\q\t\e\7\c\z\a\p\z\j\y\z\o\6\z\5\k\e\3\9\4\q\8\c\a\o\e\x\a\9\p\w\k\j\p\m\u\f\l\y\r\d\i\a\7\1\0\k\o\u\u\m\j\m\1\1\p\s\q\d\n\d\v\y\m\m\g\t\p\5\f\z\0\t\9\f\2\t\h\1\d\u\f\8\s\g\k\a\a\c\u\n\d\g\7\u\v\2\w\e\7\s\w\u\8\u\m\m\x\u\b\n\1\q\2\x\y\j\r\u\1\4\v\9\z\6\i\5\x\h\q\e\f\x\m\y\m\f\h\7\y\z\c\l\z\u\o\r\3\c\5\r\o\b\a\j\x\q\j\o\0\f\k\y\4\w\g\r\f\y\h\e\3\0\1\9\w\l\k\v\o\5\7\e\j\w\t\u\s\u\m\4\d\3\o\d\w\t\t\g\j\t\d\r\m\2 ]] 00:06:54.349 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:54.350 16:21:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:54.350 [2024-07-15 16:21:39.774138] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
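A note on the backslash-riddled comparison just above: both sides are the same 512-byte random payload read back from the two files. The right-hand side only looks escaped because bash's xtrace prints the quoted pattern operand of [[ == ]] with every character backslash-escaped, marking it as a literal match. The same effect shows up with any string:

  a=abc
  set -x
  [[ $a == "$a" ]] && echo match    # xtrace prints:  [[ abc == \a\b\c ]]
  set +x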
00:06:54.350 [2024-07-15 16:21:39.774230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63158 ] 00:06:54.609 [2024-07-15 16:21:39.905019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.609 [2024-07-15 16:21:40.020528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.609 [2024-07-15 16:21:40.078892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.868  Copying: 512/512 [B] (average 500 kBps) 00:06:54.868 00:06:54.868 16:21:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ au5uqzea93ql788af6vjylbop1tscj824glgxh9y8g50k9n6j3n7hhen9sxymgpzupcecrwgc86dd3xsdk3bn4rvtuzsn30w4up8i06epcz46db45cebytjdc3xe4miwpjvcg66zvw6i0n37mdmn0qc5i1rb9nnipvz5nsz3y214h5f6mrg57bl0ohclvp3nmw1qx5ia0oju1hfxy7pwm4mfhime5yubqos9e0meenvr2dtcx0o0t6q4nfouk5f4aomg3yhukugenm1bxaax0itkgd2iasnuxuxmasel89hgei99m0qte7czapzjyzo6z5ke394q8caoexa9pwkjpmuflyrdia710kouumjm11psqdndvymmgtp5fz0t9f2th1duf8sgkaacundg7uv2we7swu8ummxubn1q2xyjru14v9z6i5xhqefxmymfh7yzclzuor3c5robajxqjo0fky4wgrfyhe3019wlkvo57ejwtusum4d3odwttgjtdrm2 == \a\u\5\u\q\z\e\a\9\3\q\l\7\8\8\a\f\6\v\j\y\l\b\o\p\1\t\s\c\j\8\2\4\g\l\g\x\h\9\y\8\g\5\0\k\9\n\6\j\3\n\7\h\h\e\n\9\s\x\y\m\g\p\z\u\p\c\e\c\r\w\g\c\8\6\d\d\3\x\s\d\k\3\b\n\4\r\v\t\u\z\s\n\3\0\w\4\u\p\8\i\0\6\e\p\c\z\4\6\d\b\4\5\c\e\b\y\t\j\d\c\3\x\e\4\m\i\w\p\j\v\c\g\6\6\z\v\w\6\i\0\n\3\7\m\d\m\n\0\q\c\5\i\1\r\b\9\n\n\i\p\v\z\5\n\s\z\3\y\2\1\4\h\5\f\6\m\r\g\5\7\b\l\0\o\h\c\l\v\p\3\n\m\w\1\q\x\5\i\a\0\o\j\u\1\h\f\x\y\7\p\w\m\4\m\f\h\i\m\e\5\y\u\b\q\o\s\9\e\0\m\e\e\n\v\r\2\d\t\c\x\0\o\0\t\6\q\4\n\f\o\u\k\5\f\4\a\o\m\g\3\y\h\u\k\u\g\e\n\m\1\b\x\a\a\x\0\i\t\k\g\d\2\i\a\s\n\u\x\u\x\m\a\s\e\l\8\9\h\g\e\i\9\9\m\0\q\t\e\7\c\z\a\p\z\j\y\z\o\6\z\5\k\e\3\9\4\q\8\c\a\o\e\x\a\9\p\w\k\j\p\m\u\f\l\y\r\d\i\a\7\1\0\k\o\u\u\m\j\m\1\1\p\s\q\d\n\d\v\y\m\m\g\t\p\5\f\z\0\t\9\f\2\t\h\1\d\u\f\8\s\g\k\a\a\c\u\n\d\g\7\u\v\2\w\e\7\s\w\u\8\u\m\m\x\u\b\n\1\q\2\x\y\j\r\u\1\4\v\9\z\6\i\5\x\h\q\e\f\x\m\y\m\f\h\7\y\z\c\l\z\u\o\r\3\c\5\r\o\b\a\j\x\q\j\o\0\f\k\y\4\w\g\r\f\y\h\e\3\0\1\9\w\l\k\v\o\5\7\e\j\w\t\u\s\u\m\4\d\3\o\d\w\t\t\g\j\t\d\r\m\2 ]] 00:06:54.868 16:21:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:54.868 16:21:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:54.868 [2024-07-15 16:21:40.377607] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:54.868 [2024-07-15 16:21:40.377710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63169 ] 00:06:55.127 [2024-07-15 16:21:40.515590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.127 [2024-07-15 16:21:40.624536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.385 [2024-07-15 16:21:40.682303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.385  Copying: 512/512 [B] (average 166 kBps) 00:06:55.385 00:06:55.643 16:21:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ au5uqzea93ql788af6vjylbop1tscj824glgxh9y8g50k9n6j3n7hhen9sxymgpzupcecrwgc86dd3xsdk3bn4rvtuzsn30w4up8i06epcz46db45cebytjdc3xe4miwpjvcg66zvw6i0n37mdmn0qc5i1rb9nnipvz5nsz3y214h5f6mrg57bl0ohclvp3nmw1qx5ia0oju1hfxy7pwm4mfhime5yubqos9e0meenvr2dtcx0o0t6q4nfouk5f4aomg3yhukugenm1bxaax0itkgd2iasnuxuxmasel89hgei99m0qte7czapzjyzo6z5ke394q8caoexa9pwkjpmuflyrdia710kouumjm11psqdndvymmgtp5fz0t9f2th1duf8sgkaacundg7uv2we7swu8ummxubn1q2xyjru14v9z6i5xhqefxmymfh7yzclzuor3c5robajxqjo0fky4wgrfyhe3019wlkvo57ejwtusum4d3odwttgjtdrm2 == \a\u\5\u\q\z\e\a\9\3\q\l\7\8\8\a\f\6\v\j\y\l\b\o\p\1\t\s\c\j\8\2\4\g\l\g\x\h\9\y\8\g\5\0\k\9\n\6\j\3\n\7\h\h\e\n\9\s\x\y\m\g\p\z\u\p\c\e\c\r\w\g\c\8\6\d\d\3\x\s\d\k\3\b\n\4\r\v\t\u\z\s\n\3\0\w\4\u\p\8\i\0\6\e\p\c\z\4\6\d\b\4\5\c\e\b\y\t\j\d\c\3\x\e\4\m\i\w\p\j\v\c\g\6\6\z\v\w\6\i\0\n\3\7\m\d\m\n\0\q\c\5\i\1\r\b\9\n\n\i\p\v\z\5\n\s\z\3\y\2\1\4\h\5\f\6\m\r\g\5\7\b\l\0\o\h\c\l\v\p\3\n\m\w\1\q\x\5\i\a\0\o\j\u\1\h\f\x\y\7\p\w\m\4\m\f\h\i\m\e\5\y\u\b\q\o\s\9\e\0\m\e\e\n\v\r\2\d\t\c\x\0\o\0\t\6\q\4\n\f\o\u\k\5\f\4\a\o\m\g\3\y\h\u\k\u\g\e\n\m\1\b\x\a\a\x\0\i\t\k\g\d\2\i\a\s\n\u\x\u\x\m\a\s\e\l\8\9\h\g\e\i\9\9\m\0\q\t\e\7\c\z\a\p\z\j\y\z\o\6\z\5\k\e\3\9\4\q\8\c\a\o\e\x\a\9\p\w\k\j\p\m\u\f\l\y\r\d\i\a\7\1\0\k\o\u\u\m\j\m\1\1\p\s\q\d\n\d\v\y\m\m\g\t\p\5\f\z\0\t\9\f\2\t\h\1\d\u\f\8\s\g\k\a\a\c\u\n\d\g\7\u\v\2\w\e\7\s\w\u\8\u\m\m\x\u\b\n\1\q\2\x\y\j\r\u\1\4\v\9\z\6\i\5\x\h\q\e\f\x\m\y\m\f\h\7\y\z\c\l\z\u\o\r\3\c\5\r\o\b\a\j\x\q\j\o\0\f\k\y\4\w\g\r\f\y\h\e\3\0\1\9\w\l\k\v\o\5\7\e\j\w\t\u\s\u\m\4\d\3\o\d\w\t\t\g\j\t\d\r\m\2 ]] 00:06:55.643 16:21:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.643 16:21:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:55.643 [2024-07-15 16:21:40.990378] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:55.643 [2024-07-15 16:21:40.990493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63178 ] 00:06:55.643 [2024-07-15 16:21:41.123429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.901 [2024-07-15 16:21:41.234977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.901 [2024-07-15 16:21:41.289817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.181  Copying: 512/512 [B] (average 250 kBps) 00:06:56.181 00:06:56.181 16:21:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ au5uqzea93ql788af6vjylbop1tscj824glgxh9y8g50k9n6j3n7hhen9sxymgpzupcecrwgc86dd3xsdk3bn4rvtuzsn30w4up8i06epcz46db45cebytjdc3xe4miwpjvcg66zvw6i0n37mdmn0qc5i1rb9nnipvz5nsz3y214h5f6mrg57bl0ohclvp3nmw1qx5ia0oju1hfxy7pwm4mfhime5yubqos9e0meenvr2dtcx0o0t6q4nfouk5f4aomg3yhukugenm1bxaax0itkgd2iasnuxuxmasel89hgei99m0qte7czapzjyzo6z5ke394q8caoexa9pwkjpmuflyrdia710kouumjm11psqdndvymmgtp5fz0t9f2th1duf8sgkaacundg7uv2we7swu8ummxubn1q2xyjru14v9z6i5xhqefxmymfh7yzclzuor3c5robajxqjo0fky4wgrfyhe3019wlkvo57ejwtusum4d3odwttgjtdrm2 == \a\u\5\u\q\z\e\a\9\3\q\l\7\8\8\a\f\6\v\j\y\l\b\o\p\1\t\s\c\j\8\2\4\g\l\g\x\h\9\y\8\g\5\0\k\9\n\6\j\3\n\7\h\h\e\n\9\s\x\y\m\g\p\z\u\p\c\e\c\r\w\g\c\8\6\d\d\3\x\s\d\k\3\b\n\4\r\v\t\u\z\s\n\3\0\w\4\u\p\8\i\0\6\e\p\c\z\4\6\d\b\4\5\c\e\b\y\t\j\d\c\3\x\e\4\m\i\w\p\j\v\c\g\6\6\z\v\w\6\i\0\n\3\7\m\d\m\n\0\q\c\5\i\1\r\b\9\n\n\i\p\v\z\5\n\s\z\3\y\2\1\4\h\5\f\6\m\r\g\5\7\b\l\0\o\h\c\l\v\p\3\n\m\w\1\q\x\5\i\a\0\o\j\u\1\h\f\x\y\7\p\w\m\4\m\f\h\i\m\e\5\y\u\b\q\o\s\9\e\0\m\e\e\n\v\r\2\d\t\c\x\0\o\0\t\6\q\4\n\f\o\u\k\5\f\4\a\o\m\g\3\y\h\u\k\u\g\e\n\m\1\b\x\a\a\x\0\i\t\k\g\d\2\i\a\s\n\u\x\u\x\m\a\s\e\l\8\9\h\g\e\i\9\9\m\0\q\t\e\7\c\z\a\p\z\j\y\z\o\6\z\5\k\e\3\9\4\q\8\c\a\o\e\x\a\9\p\w\k\j\p\m\u\f\l\y\r\d\i\a\7\1\0\k\o\u\u\m\j\m\1\1\p\s\q\d\n\d\v\y\m\m\g\t\p\5\f\z\0\t\9\f\2\t\h\1\d\u\f\8\s\g\k\a\a\c\u\n\d\g\7\u\v\2\w\e\7\s\w\u\8\u\m\m\x\u\b\n\1\q\2\x\y\j\r\u\1\4\v\9\z\6\i\5\x\h\q\e\f\x\m\y\m\f\h\7\y\z\c\l\z\u\o\r\3\c\5\r\o\b\a\j\x\q\j\o\0\f\k\y\4\w\g\r\f\y\h\e\3\0\1\9\w\l\k\v\o\5\7\e\j\w\t\u\s\u\m\4\d\3\o\d\w\t\t\g\j\t\d\r\m\2 ]] 00:06:56.181 16:21:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:56.181 16:21:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:56.181 16:21:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:56.181 16:21:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:56.181 16:21:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.181 16:21:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:56.181 [2024-07-15 16:21:41.597155] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:56.181 [2024-07-15 16:21:41.597247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63188 ] 00:06:56.444 [2024-07-15 16:21:41.731589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.444 [2024-07-15 16:21:41.838897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.444 [2024-07-15 16:21:41.896031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.701  Copying: 512/512 [B] (average 500 kBps) 00:06:56.701 00:06:56.701 16:21:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9t4xdqw6kc72xch75xgq27k8b4k7s05ax7q0zpnl00hcrspjmttb2ivx03wpl6b552re3fxt3xaww8t0ifwu5wurjwbqdi8g276oziw3k8s6lqx9crsp9vpch2qui2b8af6yy1odt3wq5x9hft9p1rir2xh7j4ytnpamwxeo8g7yo9qp3c2k2tlu1qvex9sj2kjdn27o05uaxs18wbplqe4u3r9drk135ic6ef244q203lc2rbqgx0tbjrh1e7sb2nshj77wqx8n1sgneczn3a5fq50hs0mligyrzi6cfev0xa0yc9kbhnvs9geu55sbgx2jvonuahrfiyliupepldw0xclk2jafoamnzdw3938u10t2mvttjl1fpirult31r6wpzh3pchexvj3r01xv1vcm3ftzs0lekjx75jn5zscelv8m6mx6oqnox08uve2ryui0jdi0zyqmswxyvi6j7qc819f3qyfgrxmw0pkl7syy3aulbp7g9agyoiaxt5by == \9\t\4\x\d\q\w\6\k\c\7\2\x\c\h\7\5\x\g\q\2\7\k\8\b\4\k\7\s\0\5\a\x\7\q\0\z\p\n\l\0\0\h\c\r\s\p\j\m\t\t\b\2\i\v\x\0\3\w\p\l\6\b\5\5\2\r\e\3\f\x\t\3\x\a\w\w\8\t\0\i\f\w\u\5\w\u\r\j\w\b\q\d\i\8\g\2\7\6\o\z\i\w\3\k\8\s\6\l\q\x\9\c\r\s\p\9\v\p\c\h\2\q\u\i\2\b\8\a\f\6\y\y\1\o\d\t\3\w\q\5\x\9\h\f\t\9\p\1\r\i\r\2\x\h\7\j\4\y\t\n\p\a\m\w\x\e\o\8\g\7\y\o\9\q\p\3\c\2\k\2\t\l\u\1\q\v\e\x\9\s\j\2\k\j\d\n\2\7\o\0\5\u\a\x\s\1\8\w\b\p\l\q\e\4\u\3\r\9\d\r\k\1\3\5\i\c\6\e\f\2\4\4\q\2\0\3\l\c\2\r\b\q\g\x\0\t\b\j\r\h\1\e\7\s\b\2\n\s\h\j\7\7\w\q\x\8\n\1\s\g\n\e\c\z\n\3\a\5\f\q\5\0\h\s\0\m\l\i\g\y\r\z\i\6\c\f\e\v\0\x\a\0\y\c\9\k\b\h\n\v\s\9\g\e\u\5\5\s\b\g\x\2\j\v\o\n\u\a\h\r\f\i\y\l\i\u\p\e\p\l\d\w\0\x\c\l\k\2\j\a\f\o\a\m\n\z\d\w\3\9\3\8\u\1\0\t\2\m\v\t\t\j\l\1\f\p\i\r\u\l\t\3\1\r\6\w\p\z\h\3\p\c\h\e\x\v\j\3\r\0\1\x\v\1\v\c\m\3\f\t\z\s\0\l\e\k\j\x\7\5\j\n\5\z\s\c\e\l\v\8\m\6\m\x\6\o\q\n\o\x\0\8\u\v\e\2\r\y\u\i\0\j\d\i\0\z\y\q\m\s\w\x\y\v\i\6\j\7\q\c\8\1\9\f\3\q\y\f\g\r\x\m\w\0\p\k\l\7\s\y\y\3\a\u\l\b\p\7\g\9\a\g\y\o\i\a\x\t\5\b\y ]] 00:06:56.701 16:21:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.701 16:21:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:56.701 [2024-07-15 16:21:42.186561] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:56.702 [2024-07-15 16:21:42.186641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63203 ] 00:06:57.001 [2024-07-15 16:21:42.317164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.001 [2024-07-15 16:21:42.416674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.001 [2024-07-15 16:21:42.473459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.258  Copying: 512/512 [B] (average 500 kBps) 00:06:57.258 00:06:57.258 16:21:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9t4xdqw6kc72xch75xgq27k8b4k7s05ax7q0zpnl00hcrspjmttb2ivx03wpl6b552re3fxt3xaww8t0ifwu5wurjwbqdi8g276oziw3k8s6lqx9crsp9vpch2qui2b8af6yy1odt3wq5x9hft9p1rir2xh7j4ytnpamwxeo8g7yo9qp3c2k2tlu1qvex9sj2kjdn27o05uaxs18wbplqe4u3r9drk135ic6ef244q203lc2rbqgx0tbjrh1e7sb2nshj77wqx8n1sgneczn3a5fq50hs0mligyrzi6cfev0xa0yc9kbhnvs9geu55sbgx2jvonuahrfiyliupepldw0xclk2jafoamnzdw3938u10t2mvttjl1fpirult31r6wpzh3pchexvj3r01xv1vcm3ftzs0lekjx75jn5zscelv8m6mx6oqnox08uve2ryui0jdi0zyqmswxyvi6j7qc819f3qyfgrxmw0pkl7syy3aulbp7g9agyoiaxt5by == \9\t\4\x\d\q\w\6\k\c\7\2\x\c\h\7\5\x\g\q\2\7\k\8\b\4\k\7\s\0\5\a\x\7\q\0\z\p\n\l\0\0\h\c\r\s\p\j\m\t\t\b\2\i\v\x\0\3\w\p\l\6\b\5\5\2\r\e\3\f\x\t\3\x\a\w\w\8\t\0\i\f\w\u\5\w\u\r\j\w\b\q\d\i\8\g\2\7\6\o\z\i\w\3\k\8\s\6\l\q\x\9\c\r\s\p\9\v\p\c\h\2\q\u\i\2\b\8\a\f\6\y\y\1\o\d\t\3\w\q\5\x\9\h\f\t\9\p\1\r\i\r\2\x\h\7\j\4\y\t\n\p\a\m\w\x\e\o\8\g\7\y\o\9\q\p\3\c\2\k\2\t\l\u\1\q\v\e\x\9\s\j\2\k\j\d\n\2\7\o\0\5\u\a\x\s\1\8\w\b\p\l\q\e\4\u\3\r\9\d\r\k\1\3\5\i\c\6\e\f\2\4\4\q\2\0\3\l\c\2\r\b\q\g\x\0\t\b\j\r\h\1\e\7\s\b\2\n\s\h\j\7\7\w\q\x\8\n\1\s\g\n\e\c\z\n\3\a\5\f\q\5\0\h\s\0\m\l\i\g\y\r\z\i\6\c\f\e\v\0\x\a\0\y\c\9\k\b\h\n\v\s\9\g\e\u\5\5\s\b\g\x\2\j\v\o\n\u\a\h\r\f\i\y\l\i\u\p\e\p\l\d\w\0\x\c\l\k\2\j\a\f\o\a\m\n\z\d\w\3\9\3\8\u\1\0\t\2\m\v\t\t\j\l\1\f\p\i\r\u\l\t\3\1\r\6\w\p\z\h\3\p\c\h\e\x\v\j\3\r\0\1\x\v\1\v\c\m\3\f\t\z\s\0\l\e\k\j\x\7\5\j\n\5\z\s\c\e\l\v\8\m\6\m\x\6\o\q\n\o\x\0\8\u\v\e\2\r\y\u\i\0\j\d\i\0\z\y\q\m\s\w\x\y\v\i\6\j\7\q\c\8\1\9\f\3\q\y\f\g\r\x\m\w\0\p\k\l\7\s\y\y\3\a\u\l\b\p\7\g\9\a\g\y\o\i\a\x\t\5\b\y ]] 00:06:57.258 16:21:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.259 16:21:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:57.259 [2024-07-15 16:21:42.772426] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:57.259 [2024-07-15 16:21:42.772526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63207 ] 00:06:57.515 [2024-07-15 16:21:42.906373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.515 [2024-07-15 16:21:43.022231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.773 [2024-07-15 16:21:43.079605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.031  Copying: 512/512 [B] (average 250 kBps) 00:06:58.031 00:06:58.031 16:21:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9t4xdqw6kc72xch75xgq27k8b4k7s05ax7q0zpnl00hcrspjmttb2ivx03wpl6b552re3fxt3xaww8t0ifwu5wurjwbqdi8g276oziw3k8s6lqx9crsp9vpch2qui2b8af6yy1odt3wq5x9hft9p1rir2xh7j4ytnpamwxeo8g7yo9qp3c2k2tlu1qvex9sj2kjdn27o05uaxs18wbplqe4u3r9drk135ic6ef244q203lc2rbqgx0tbjrh1e7sb2nshj77wqx8n1sgneczn3a5fq50hs0mligyrzi6cfev0xa0yc9kbhnvs9geu55sbgx2jvonuahrfiyliupepldw0xclk2jafoamnzdw3938u10t2mvttjl1fpirult31r6wpzh3pchexvj3r01xv1vcm3ftzs0lekjx75jn5zscelv8m6mx6oqnox08uve2ryui0jdi0zyqmswxyvi6j7qc819f3qyfgrxmw0pkl7syy3aulbp7g9agyoiaxt5by == \9\t\4\x\d\q\w\6\k\c\7\2\x\c\h\7\5\x\g\q\2\7\k\8\b\4\k\7\s\0\5\a\x\7\q\0\z\p\n\l\0\0\h\c\r\s\p\j\m\t\t\b\2\i\v\x\0\3\w\p\l\6\b\5\5\2\r\e\3\f\x\t\3\x\a\w\w\8\t\0\i\f\w\u\5\w\u\r\j\w\b\q\d\i\8\g\2\7\6\o\z\i\w\3\k\8\s\6\l\q\x\9\c\r\s\p\9\v\p\c\h\2\q\u\i\2\b\8\a\f\6\y\y\1\o\d\t\3\w\q\5\x\9\h\f\t\9\p\1\r\i\r\2\x\h\7\j\4\y\t\n\p\a\m\w\x\e\o\8\g\7\y\o\9\q\p\3\c\2\k\2\t\l\u\1\q\v\e\x\9\s\j\2\k\j\d\n\2\7\o\0\5\u\a\x\s\1\8\w\b\p\l\q\e\4\u\3\r\9\d\r\k\1\3\5\i\c\6\e\f\2\4\4\q\2\0\3\l\c\2\r\b\q\g\x\0\t\b\j\r\h\1\e\7\s\b\2\n\s\h\j\7\7\w\q\x\8\n\1\s\g\n\e\c\z\n\3\a\5\f\q\5\0\h\s\0\m\l\i\g\y\r\z\i\6\c\f\e\v\0\x\a\0\y\c\9\k\b\h\n\v\s\9\g\e\u\5\5\s\b\g\x\2\j\v\o\n\u\a\h\r\f\i\y\l\i\u\p\e\p\l\d\w\0\x\c\l\k\2\j\a\f\o\a\m\n\z\d\w\3\9\3\8\u\1\0\t\2\m\v\t\t\j\l\1\f\p\i\r\u\l\t\3\1\r\6\w\p\z\h\3\p\c\h\e\x\v\j\3\r\0\1\x\v\1\v\c\m\3\f\t\z\s\0\l\e\k\j\x\7\5\j\n\5\z\s\c\e\l\v\8\m\6\m\x\6\o\q\n\o\x\0\8\u\v\e\2\r\y\u\i\0\j\d\i\0\z\y\q\m\s\w\x\y\v\i\6\j\7\q\c\8\1\9\f\3\q\y\f\g\r\x\m\w\0\p\k\l\7\s\y\y\3\a\u\l\b\p\7\g\9\a\g\y\o\i\a\x\t\5\b\y ]] 00:06:58.031 16:21:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.031 16:21:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:58.031 [2024-07-15 16:21:43.381085] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:06:58.031 [2024-07-15 16:21:43.381174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63222 ] 00:06:58.031 [2024-07-15 16:21:43.517382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.288 [2024-07-15 16:21:43.617816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.288 [2024-07-15 16:21:43.672834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.547  Copying: 512/512 [B] (average 250 kBps) 00:06:58.547 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9t4xdqw6kc72xch75xgq27k8b4k7s05ax7q0zpnl00hcrspjmttb2ivx03wpl6b552re3fxt3xaww8t0ifwu5wurjwbqdi8g276oziw3k8s6lqx9crsp9vpch2qui2b8af6yy1odt3wq5x9hft9p1rir2xh7j4ytnpamwxeo8g7yo9qp3c2k2tlu1qvex9sj2kjdn27o05uaxs18wbplqe4u3r9drk135ic6ef244q203lc2rbqgx0tbjrh1e7sb2nshj77wqx8n1sgneczn3a5fq50hs0mligyrzi6cfev0xa0yc9kbhnvs9geu55sbgx2jvonuahrfiyliupepldw0xclk2jafoamnzdw3938u10t2mvttjl1fpirult31r6wpzh3pchexvj3r01xv1vcm3ftzs0lekjx75jn5zscelv8m6mx6oqnox08uve2ryui0jdi0zyqmswxyvi6j7qc819f3qyfgrxmw0pkl7syy3aulbp7g9agyoiaxt5by == \9\t\4\x\d\q\w\6\k\c\7\2\x\c\h\7\5\x\g\q\2\7\k\8\b\4\k\7\s\0\5\a\x\7\q\0\z\p\n\l\0\0\h\c\r\s\p\j\m\t\t\b\2\i\v\x\0\3\w\p\l\6\b\5\5\2\r\e\3\f\x\t\3\x\a\w\w\8\t\0\i\f\w\u\5\w\u\r\j\w\b\q\d\i\8\g\2\7\6\o\z\i\w\3\k\8\s\6\l\q\x\9\c\r\s\p\9\v\p\c\h\2\q\u\i\2\b\8\a\f\6\y\y\1\o\d\t\3\w\q\5\x\9\h\f\t\9\p\1\r\i\r\2\x\h\7\j\4\y\t\n\p\a\m\w\x\e\o\8\g\7\y\o\9\q\p\3\c\2\k\2\t\l\u\1\q\v\e\x\9\s\j\2\k\j\d\n\2\7\o\0\5\u\a\x\s\1\8\w\b\p\l\q\e\4\u\3\r\9\d\r\k\1\3\5\i\c\6\e\f\2\4\4\q\2\0\3\l\c\2\r\b\q\g\x\0\t\b\j\r\h\1\e\7\s\b\2\n\s\h\j\7\7\w\q\x\8\n\1\s\g\n\e\c\z\n\3\a\5\f\q\5\0\h\s\0\m\l\i\g\y\r\z\i\6\c\f\e\v\0\x\a\0\y\c\9\k\b\h\n\v\s\9\g\e\u\5\5\s\b\g\x\2\j\v\o\n\u\a\h\r\f\i\y\l\i\u\p\e\p\l\d\w\0\x\c\l\k\2\j\a\f\o\a\m\n\z\d\w\3\9\3\8\u\1\0\t\2\m\v\t\t\j\l\1\f\p\i\r\u\l\t\3\1\r\6\w\p\z\h\3\p\c\h\e\x\v\j\3\r\0\1\x\v\1\v\c\m\3\f\t\z\s\0\l\e\k\j\x\7\5\j\n\5\z\s\c\e\l\v\8\m\6\m\x\6\o\q\n\o\x\0\8\u\v\e\2\r\y\u\i\0\j\d\i\0\z\y\q\m\s\w\x\y\v\i\6\j\7\q\c\8\1\9\f\3\q\y\f\g\r\x\m\w\0\p\k\l\7\s\y\y\3\a\u\l\b\p\7\g\9\a\g\y\o\i\a\x\t\5\b\y ]] 00:06:58.547 00:06:58.547 real 0m4.802s 00:06:58.547 user 0m2.748s 00:06:58.547 sys 0m2.267s 00:06:58.547 ************************************ 00:06:58.547 END TEST dd_flags_misc 00:06:58.547 ************************************ 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:58.547 * Second test run, disabling liburing, forcing AIO 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:58.547 ************************************ 00:06:58.547 START TEST dd_flag_append_forced_aio 00:06:58.547 ************************************ 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=af9xv4jkq5lvbttdb92qiu5ofjkzmyet 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=qz2xxgvq39f1vn33nce1ftekcx879284 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s af9xv4jkq5lvbttdb92qiu5ofjkzmyet 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s qz2xxgvq39f1vn33nce1ftekcx879284 00:06:58.547 16:21:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:58.547 [2024-07-15 16:21:44.027921] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
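From this point the harness adds --aio to every spdk_dd invocation ("Second test run, disabling liburing, forcing AIO" above), so the same flag tests are exercised through the AIO code path. The append case writes two 32-byte tokens, copies dump0 onto dump1 with --oflag=append, and expects dump1 to end up as its old content followed by dump0's. In outline, with the path and payloads as placeholders rather than the values generated in this run:

  SPDK_DD=./build/bin/spdk_dd             # assumed path
  p0='0123456789abcdefghijklmnopqrstuv'   # stand-ins for the two generated 32-byte tokens
  p1='ABCDEFGHIJKLMNOPQRSTUVWXYZ012345'
  printf %s "$p0" > dd.dump0
  printf %s "$p1" > dd.dump1
  "$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
  [ "$(cat dd.dump1)" = "${p1}${p0}" ] || echo "append did not concatenate" >&2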
00:06:58.547 [2024-07-15 16:21:44.028014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63245 ] 00:06:58.806 [2024-07-15 16:21:44.170681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.806 [2024-07-15 16:21:44.278408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.806 [2024-07-15 16:21:44.338261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.065  Copying: 32/32 [B] (average 31 kBps) 00:06:59.065 00:06:59.065 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ qz2xxgvq39f1vn33nce1ftekcx879284af9xv4jkq5lvbttdb92qiu5ofjkzmyet == \q\z\2\x\x\g\v\q\3\9\f\1\v\n\3\3\n\c\e\1\f\t\e\k\c\x\8\7\9\2\8\4\a\f\9\x\v\4\j\k\q\5\l\v\b\t\t\d\b\9\2\q\i\u\5\o\f\j\k\z\m\y\e\t ]] 00:06:59.065 00:06:59.065 real 0m0.641s 00:06:59.065 user 0m0.355s 00:06:59.065 sys 0m0.161s 00:06:59.065 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.065 ************************************ 00:06:59.065 END TEST dd_flag_append_forced_aio 00:06:59.065 ************************************ 00:06:59.065 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:59.324 ************************************ 00:06:59.324 START TEST dd_flag_directory_forced_aio 00:06:59.324 ************************************ 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.324 16:21:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.324 [2024-07-15 16:21:44.711791] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:59.324 [2024-07-15 16:21:44.711901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63277 ] 00:06:59.324 [2024-07-15 16:21:44.850383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.582 [2024-07-15 16:21:44.941520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.582 [2024-07-15 16:21:44.998245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.582 [2024-07-15 16:21:45.032187] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:59.582 [2024-07-15 16:21:45.032267] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:59.582 [2024-07-15 16:21:45.032283] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.840 [2024-07-15 16:21:45.148502] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:59.840 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.841 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:59.841 [2024-07-15 16:21:45.294376] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:06:59.841 [2024-07-15 16:21:45.294473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63292 ] 00:07:00.100 [2024-07-15 16:21:45.426628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.100 [2024-07-15 16:21:45.520540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.100 [2024-07-15 16:21:45.578055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.100 [2024-07-15 16:21:45.612002] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.100 [2024-07-15 16:21:45.612052] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.100 [2024-07-15 16:21:45.612083] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.359 [2024-07-15 16:21:45.727290] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:00.359 
16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.359 ************************************ 00:07:00.359 END TEST dd_flag_directory_forced_aio 00:07:00.359 ************************************ 00:07:00.359 00:07:00.359 real 0m1.197s 00:07:00.359 user 0m0.701s 00:07:00.359 sys 0m0.287s 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:00.359 ************************************ 00:07:00.359 START TEST dd_flag_nofollow_forced_aio 00:07:00.359 ************************************ 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:00.359 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.618 16:21:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.618 [2024-07-15 16:21:45.975485] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:00.618 [2024-07-15 16:21:45.975594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63315 ] 00:07:00.618 [2024-07-15 16:21:46.119377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.876 [2024-07-15 16:21:46.254431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.876 [2024-07-15 16:21:46.311844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.876 [2024-07-15 16:21:46.344567] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:00.876 [2024-07-15 16:21:46.344623] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:00.876 [2024-07-15 16:21:46.344654] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.134 [2024-07-15 16:21:46.467661] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
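A minimal sketch of what the dd_flag_nofollow_forced_aio trace above and below is exercising: a copy through a symlink must fail when nofollow is set on either side, and succeed without it. Paths and flags are taken verbatim from the xtrace; the NOT wrapper and the error-code bookkeeping (es=216 -> es=88 -> es=1) from autotest_common.sh are omitted, so this is an illustration rather than the test's literal code.

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D=/home/vagrant/spdk_repo/spdk/test/dd

    # Symlinks created by the test setup (dd/posix.sh@39-40 in the trace).
    ln -fs "$D/dd.dump0" "$D/dd.dump0.link"
    ln -fs "$D/dd.dump1" "$D/dd.dump1.link"

    # Reading through a link with --iflag=nofollow must fail
    # ("Too many levels of symbolic links" in the log) ...
    if "$DD" --aio --if="$D/dd.dump0.link" --iflag=nofollow --of="$D/dd.dump1"; then
        echo "nofollow read unexpectedly succeeded" >&2; exit 1
    fi
    # ... as must writing through a link with --oflag=nofollow.
    if "$DD" --aio --if="$D/dd.dump0" --of="$D/dd.dump1.link" --oflag=nofollow; then
        echo "nofollow write unexpectedly succeeded" >&2; exit 1
    fi

    # Without the flag, the same copy through the link succeeds (dd/posix.sh@48).
    "$DD" --aio --if="$D/dd.dump0.link" --of="$D/dd.dump1"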
00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.134 16:21:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:01.134 [2024-07-15 16:21:46.634833] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:01.134 [2024-07-15 16:21:46.634947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63330 ] 00:07:01.394 [2024-07-15 16:21:46.773497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.394 [2024-07-15 16:21:46.869156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.394 [2024-07-15 16:21:46.926438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.652 [2024-07-15 16:21:46.959296] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:01.652 [2024-07-15 16:21:46.959345] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:01.652 [2024-07-15 16:21:46.959360] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.652 [2024-07-15 16:21:47.072635] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:01.652 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.911 [2024-07-15 16:21:47.235654] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:01.911 [2024-07-15 16:21:47.235749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63333 ] 00:07:01.911 [2024-07-15 16:21:47.373996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.169 [2024-07-15 16:21:47.479793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.169 [2024-07-15 16:21:47.535096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.427  Copying: 512/512 [B] (average 500 kBps) 00:07:02.427 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ gswjwdcvqeudyxmeo2qna5zm1gabsbk12d7fbqvpxgnzz608bigl0ge2ig4kg1x1mucny07kz26e5m4yp0sf2ps9998td1jckqap1uqd82zy8bnzzg93zpqbubyi1pc73pfzabfa2dojrsi5180fghh2il8pwozwi7r5poi99f8lzhraoblnd5y7ddxxwgcbrmqxsy6l35whpmgiagak8gze4cd3055jpahrniulgiiipw4d5wsvv1rikdz1i8xu665o41uj2dy2rg958uqj67csz6ek4w35320r5qlgca6eqsrhw5e4leykca01udaevktwt0c1b0iabz2ak87m1cy1bbd2m0ec949u700ttn1d4oefp856acgbg8tmj5ywg7qc8nhxp4isqdz9hh5goyd72lc421i52uhh8gba8u8d7r8dozawon74smxuoplz1e2wg3yrgug2vqumzhvuojmgqlkhija5udgtwkb552quda8dmatqmphuk1ihedqi == \g\s\w\j\w\d\c\v\q\e\u\d\y\x\m\e\o\2\q\n\a\5\z\m\1\g\a\b\s\b\k\1\2\d\7\f\b\q\v\p\x\g\n\z\z\6\0\8\b\i\g\l\0\g\e\2\i\g\4\k\g\1\x\1\m\u\c\n\y\0\7\k\z\2\6\e\5\m\4\y\p\0\s\f\2\p\s\9\9\9\8\t\d\1\j\c\k\q\a\p\1\u\q\d\8\2\z\y\8\b\n\z\z\g\9\3\z\p\q\b\u\b\y\i\1\p\c\7\3\p\f\z\a\b\f\a\2\d\o\j\r\s\i\5\1\8\0\f\g\h\h\2\i\l\8\p\w\o\z\w\i\7\r\5\p\o\i\9\9\f\8\l\z\h\r\a\o\b\l\n\d\5\y\7\d\d\x\x\w\g\c\b\r\m\q\x\s\y\6\l\3\5\w\h\p\m\g\i\a\g\a\k\8\g\z\e\4\c\d\3\0\5\5\j\p\a\h\r\n\i\u\l\g\i\i\i\p\w\4\d\5\w\s\v\v\1\r\i\k\d\z\1\i\8\x\u\6\6\5\o\4\1\u\j\2\d\y\2\r\g\9\5\8\u\q\j\6\7\c\s\z\6\e\k\4\w\3\5\3\2\0\r\5\q\l\g\c\a\6\e\q\s\r\h\w\5\e\4\l\e\y\k\c\a\0\1\u\d\a\e\v\k\t\w\t\0\c\1\b\0\i\a\b\z\2\a\k\8\7\m\1\c\y\1\b\b\d\2\m\0\e\c\9\4\9\u\7\0\0\t\t\n\1\d\4\o\e\f\p\8\5\6\a\c\g\b\g\8\t\m\j\5\y\w\g\7\q\c\8\n\h\x\p\4\i\s\q\d\z\9\h\h\5\g\o\y\d\7\2\l\c\4\2\1\i\5\2\u\h\h\8\g\b\a\8\u\8\d\7\r\8\d\o\z\a\w\o\n\7\4\s\m\x\u\o\p\l\z\1\e\2\w\g\3\y\r\g\u\g\2\v\q\u\m\z\h\v\u\o\j\m\g\q\l\k\h\i\j\a\5\u\d\g\t\w\k\b\5\5\2\q\u\d\a\8\d\m\a\t\q\m\p\h\u\k\1\i\h\e\d\q\i ]] 00:07:02.427 00:07:02.427 real 0m1.901s 00:07:02.427 user 0m1.087s 00:07:02.427 sys 0m0.480s 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.427 ************************************ 00:07:02.427 END TEST dd_flag_nofollow_forced_aio 00:07:02.427 ************************************ 00:07:02.427 16:21:47 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:02.427 ************************************ 00:07:02.427 START TEST dd_flag_noatime_forced_aio 00:07:02.427 ************************************ 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721060507 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721060507 00:07:02.427 16:21:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:03.426 16:21:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.426 [2024-07-15 16:21:48.945223] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
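The noatime check that starts here follows a simple stat-before/stat-after pattern; a condensed sketch with paths and flags as in the trace (the concrete timestamps 1721060507/1721060509 belong to this particular run, and the harness plumbing is omitted):

    set -e   # treat any failed command or failed (( )) check as a test failure

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    atime_before=$(stat --printf=%X "$SRC")   # 1721060507 in this run
    sleep 1

    # Copy with --iflag=noatime: the source access time must not move (dd/posix.sh@68-69).
    "$DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
    (( $(stat --printf=%X "$SRC") == atime_before ))

    # Copy again without the flag: the access time is now expected to advance (dd/posix.sh@72-73).
    "$DD" --aio --if="$SRC" --of="$DST"
    (( atime_before < $(stat --printf=%X "$SRC") ))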
00:07:03.426 [2024-07-15 16:21:48.945323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63378 ] 00:07:03.686 [2024-07-15 16:21:49.090583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.686 [2024-07-15 16:21:49.205362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.945 [2024-07-15 16:21:49.264246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.203  Copying: 512/512 [B] (average 500 kBps) 00:07:04.204 00:07:04.204 16:21:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.204 16:21:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721060507 )) 00:07:04.204 16:21:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.204 16:21:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721060507 )) 00:07:04.204 16:21:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.204 [2024-07-15 16:21:49.598672] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:04.204 [2024-07-15 16:21:49.598764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63395 ] 00:07:04.204 [2024-07-15 16:21:49.736827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.462 [2024-07-15 16:21:49.826427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.462 [2024-07-15 16:21:49.881515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.721  Copying: 512/512 [B] (average 500 kBps) 00:07:04.721 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721060509 )) 00:07:04.721 00:07:04.721 real 0m2.298s 00:07:04.721 user 0m0.710s 00:07:04.721 sys 0m0.349s 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.721 ************************************ 00:07:04.721 END TEST dd_flag_noatime_forced_aio 00:07:04.721 ************************************ 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.721 16:21:50 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:04.721 ************************************ 00:07:04.721 START TEST dd_flags_misc_forced_aio 00:07:04.721 ************************************ 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.721 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:04.979 [2024-07-15 16:21:50.271598] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:04.979 [2024-07-15 16:21:50.272001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63416 ] 00:07:04.979 [2024-07-15 16:21:50.414261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.237 [2024-07-15 16:21:50.533281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.238 [2024-07-15 16:21:50.591983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.496  Copying: 512/512 [B] (average 500 kBps) 00:07:05.496 00:07:05.496 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qu7h1loz7fi9o193ayquxjqch3yr3w06bpfqg7brpkgpnijqtzp4krzrjpl44hhskcj9f9auqyi3xfn8evf932n969s4sfg9gsv5e9rkin53xm9douq8vplb9ydbsb963hw410us8ink1myyphro37hveybuikinzv5l9ytqf3ztnxwre4ino22r8plizo0dci951w5fmok4dg8ah4dz10oa03glwmtmboafji3ua9y51l4eeft3gumba3sjl2g0b2ngx8jfyzejqviot1xibx1801qtvwaryq2samb2v1jxkpwq0500ztcbovrhyagynk3lh61z51c7kmq73jfj4wo6s2q28af9p9m45cluauxm49gorotku19uryukuvkvux9b6viul19yep7zxjqt9iuqwo68u2trb95bdl6hn6eyuc9djmi40f0yfion6ndp2xlpivoqe5zcb3qp89gw4cbgh5vn98hptev1t76vlz8m75k14d22pfkmhpgw7b1z == 
\q\u\7\h\1\l\o\z\7\f\i\9\o\1\9\3\a\y\q\u\x\j\q\c\h\3\y\r\3\w\0\6\b\p\f\q\g\7\b\r\p\k\g\p\n\i\j\q\t\z\p\4\k\r\z\r\j\p\l\4\4\h\h\s\k\c\j\9\f\9\a\u\q\y\i\3\x\f\n\8\e\v\f\9\3\2\n\9\6\9\s\4\s\f\g\9\g\s\v\5\e\9\r\k\i\n\5\3\x\m\9\d\o\u\q\8\v\p\l\b\9\y\d\b\s\b\9\6\3\h\w\4\1\0\u\s\8\i\n\k\1\m\y\y\p\h\r\o\3\7\h\v\e\y\b\u\i\k\i\n\z\v\5\l\9\y\t\q\f\3\z\t\n\x\w\r\e\4\i\n\o\2\2\r\8\p\l\i\z\o\0\d\c\i\9\5\1\w\5\f\m\o\k\4\d\g\8\a\h\4\d\z\1\0\o\a\0\3\g\l\w\m\t\m\b\o\a\f\j\i\3\u\a\9\y\5\1\l\4\e\e\f\t\3\g\u\m\b\a\3\s\j\l\2\g\0\b\2\n\g\x\8\j\f\y\z\e\j\q\v\i\o\t\1\x\i\b\x\1\8\0\1\q\t\v\w\a\r\y\q\2\s\a\m\b\2\v\1\j\x\k\p\w\q\0\5\0\0\z\t\c\b\o\v\r\h\y\a\g\y\n\k\3\l\h\6\1\z\5\1\c\7\k\m\q\7\3\j\f\j\4\w\o\6\s\2\q\2\8\a\f\9\p\9\m\4\5\c\l\u\a\u\x\m\4\9\g\o\r\o\t\k\u\1\9\u\r\y\u\k\u\v\k\v\u\x\9\b\6\v\i\u\l\1\9\y\e\p\7\z\x\j\q\t\9\i\u\q\w\o\6\8\u\2\t\r\b\9\5\b\d\l\6\h\n\6\e\y\u\c\9\d\j\m\i\4\0\f\0\y\f\i\o\n\6\n\d\p\2\x\l\p\i\v\o\q\e\5\z\c\b\3\q\p\8\9\g\w\4\c\b\g\h\5\v\n\9\8\h\p\t\e\v\1\t\7\6\v\l\z\8\m\7\5\k\1\4\d\2\2\p\f\k\m\h\p\g\w\7\b\1\z ]] 00:07:05.496 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.496 16:21:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:05.496 [2024-07-15 16:21:50.917767] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:05.496 [2024-07-15 16:21:50.917888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63429 ] 00:07:05.755 [2024-07-15 16:21:51.052977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.755 [2024-07-15 16:21:51.147375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.755 [2024-07-15 16:21:51.202173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.014  Copying: 512/512 [B] (average 500 kBps) 00:07:06.014 00:07:06.014 16:21:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qu7h1loz7fi9o193ayquxjqch3yr3w06bpfqg7brpkgpnijqtzp4krzrjpl44hhskcj9f9auqyi3xfn8evf932n969s4sfg9gsv5e9rkin53xm9douq8vplb9ydbsb963hw410us8ink1myyphro37hveybuikinzv5l9ytqf3ztnxwre4ino22r8plizo0dci951w5fmok4dg8ah4dz10oa03glwmtmboafji3ua9y51l4eeft3gumba3sjl2g0b2ngx8jfyzejqviot1xibx1801qtvwaryq2samb2v1jxkpwq0500ztcbovrhyagynk3lh61z51c7kmq73jfj4wo6s2q28af9p9m45cluauxm49gorotku19uryukuvkvux9b6viul19yep7zxjqt9iuqwo68u2trb95bdl6hn6eyuc9djmi40f0yfion6ndp2xlpivoqe5zcb3qp89gw4cbgh5vn98hptev1t76vlz8m75k14d22pfkmhpgw7b1z == 
\q\u\7\h\1\l\o\z\7\f\i\9\o\1\9\3\a\y\q\u\x\j\q\c\h\3\y\r\3\w\0\6\b\p\f\q\g\7\b\r\p\k\g\p\n\i\j\q\t\z\p\4\k\r\z\r\j\p\l\4\4\h\h\s\k\c\j\9\f\9\a\u\q\y\i\3\x\f\n\8\e\v\f\9\3\2\n\9\6\9\s\4\s\f\g\9\g\s\v\5\e\9\r\k\i\n\5\3\x\m\9\d\o\u\q\8\v\p\l\b\9\y\d\b\s\b\9\6\3\h\w\4\1\0\u\s\8\i\n\k\1\m\y\y\p\h\r\o\3\7\h\v\e\y\b\u\i\k\i\n\z\v\5\l\9\y\t\q\f\3\z\t\n\x\w\r\e\4\i\n\o\2\2\r\8\p\l\i\z\o\0\d\c\i\9\5\1\w\5\f\m\o\k\4\d\g\8\a\h\4\d\z\1\0\o\a\0\3\g\l\w\m\t\m\b\o\a\f\j\i\3\u\a\9\y\5\1\l\4\e\e\f\t\3\g\u\m\b\a\3\s\j\l\2\g\0\b\2\n\g\x\8\j\f\y\z\e\j\q\v\i\o\t\1\x\i\b\x\1\8\0\1\q\t\v\w\a\r\y\q\2\s\a\m\b\2\v\1\j\x\k\p\w\q\0\5\0\0\z\t\c\b\o\v\r\h\y\a\g\y\n\k\3\l\h\6\1\z\5\1\c\7\k\m\q\7\3\j\f\j\4\w\o\6\s\2\q\2\8\a\f\9\p\9\m\4\5\c\l\u\a\u\x\m\4\9\g\o\r\o\t\k\u\1\9\u\r\y\u\k\u\v\k\v\u\x\9\b\6\v\i\u\l\1\9\y\e\p\7\z\x\j\q\t\9\i\u\q\w\o\6\8\u\2\t\r\b\9\5\b\d\l\6\h\n\6\e\y\u\c\9\d\j\m\i\4\0\f\0\y\f\i\o\n\6\n\d\p\2\x\l\p\i\v\o\q\e\5\z\c\b\3\q\p\8\9\g\w\4\c\b\g\h\5\v\n\9\8\h\p\t\e\v\1\t\7\6\v\l\z\8\m\7\5\k\1\4\d\2\2\p\f\k\m\h\p\g\w\7\b\1\z ]] 00:07:06.014 16:21:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.015 16:21:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:06.015 [2024-07-15 16:21:51.533482] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:06.015 [2024-07-15 16:21:51.533582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63442 ] 00:07:06.273 [2024-07-15 16:21:51.671438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.273 [2024-07-15 16:21:51.783370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.532 [2024-07-15 16:21:51.843487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.791  Copying: 512/512 [B] (average 125 kBps) 00:07:06.791 00:07:06.791 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qu7h1loz7fi9o193ayquxjqch3yr3w06bpfqg7brpkgpnijqtzp4krzrjpl44hhskcj9f9auqyi3xfn8evf932n969s4sfg9gsv5e9rkin53xm9douq8vplb9ydbsb963hw410us8ink1myyphro37hveybuikinzv5l9ytqf3ztnxwre4ino22r8plizo0dci951w5fmok4dg8ah4dz10oa03glwmtmboafji3ua9y51l4eeft3gumba3sjl2g0b2ngx8jfyzejqviot1xibx1801qtvwaryq2samb2v1jxkpwq0500ztcbovrhyagynk3lh61z51c7kmq73jfj4wo6s2q28af9p9m45cluauxm49gorotku19uryukuvkvux9b6viul19yep7zxjqt9iuqwo68u2trb95bdl6hn6eyuc9djmi40f0yfion6ndp2xlpivoqe5zcb3qp89gw4cbgh5vn98hptev1t76vlz8m75k14d22pfkmhpgw7b1z == 
\q\u\7\h\1\l\o\z\7\f\i\9\o\1\9\3\a\y\q\u\x\j\q\c\h\3\y\r\3\w\0\6\b\p\f\q\g\7\b\r\p\k\g\p\n\i\j\q\t\z\p\4\k\r\z\r\j\p\l\4\4\h\h\s\k\c\j\9\f\9\a\u\q\y\i\3\x\f\n\8\e\v\f\9\3\2\n\9\6\9\s\4\s\f\g\9\g\s\v\5\e\9\r\k\i\n\5\3\x\m\9\d\o\u\q\8\v\p\l\b\9\y\d\b\s\b\9\6\3\h\w\4\1\0\u\s\8\i\n\k\1\m\y\y\p\h\r\o\3\7\h\v\e\y\b\u\i\k\i\n\z\v\5\l\9\y\t\q\f\3\z\t\n\x\w\r\e\4\i\n\o\2\2\r\8\p\l\i\z\o\0\d\c\i\9\5\1\w\5\f\m\o\k\4\d\g\8\a\h\4\d\z\1\0\o\a\0\3\g\l\w\m\t\m\b\o\a\f\j\i\3\u\a\9\y\5\1\l\4\e\e\f\t\3\g\u\m\b\a\3\s\j\l\2\g\0\b\2\n\g\x\8\j\f\y\z\e\j\q\v\i\o\t\1\x\i\b\x\1\8\0\1\q\t\v\w\a\r\y\q\2\s\a\m\b\2\v\1\j\x\k\p\w\q\0\5\0\0\z\t\c\b\o\v\r\h\y\a\g\y\n\k\3\l\h\6\1\z\5\1\c\7\k\m\q\7\3\j\f\j\4\w\o\6\s\2\q\2\8\a\f\9\p\9\m\4\5\c\l\u\a\u\x\m\4\9\g\o\r\o\t\k\u\1\9\u\r\y\u\k\u\v\k\v\u\x\9\b\6\v\i\u\l\1\9\y\e\p\7\z\x\j\q\t\9\i\u\q\w\o\6\8\u\2\t\r\b\9\5\b\d\l\6\h\n\6\e\y\u\c\9\d\j\m\i\4\0\f\0\y\f\i\o\n\6\n\d\p\2\x\l\p\i\v\o\q\e\5\z\c\b\3\q\p\8\9\g\w\4\c\b\g\h\5\v\n\9\8\h\p\t\e\v\1\t\7\6\v\l\z\8\m\7\5\k\1\4\d\2\2\p\f\k\m\h\p\g\w\7\b\1\z ]] 00:07:06.791 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.791 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:06.791 [2024-07-15 16:21:52.185099] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:06.791 [2024-07-15 16:21:52.185196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63444 ] 00:07:06.791 [2024-07-15 16:21:52.318424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.101 [2024-07-15 16:21:52.423509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.101 [2024-07-15 16:21:52.479918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.360  Copying: 512/512 [B] (average 500 kBps) 00:07:07.360 00:07:07.360 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qu7h1loz7fi9o193ayquxjqch3yr3w06bpfqg7brpkgpnijqtzp4krzrjpl44hhskcj9f9auqyi3xfn8evf932n969s4sfg9gsv5e9rkin53xm9douq8vplb9ydbsb963hw410us8ink1myyphro37hveybuikinzv5l9ytqf3ztnxwre4ino22r8plizo0dci951w5fmok4dg8ah4dz10oa03glwmtmboafji3ua9y51l4eeft3gumba3sjl2g0b2ngx8jfyzejqviot1xibx1801qtvwaryq2samb2v1jxkpwq0500ztcbovrhyagynk3lh61z51c7kmq73jfj4wo6s2q28af9p9m45cluauxm49gorotku19uryukuvkvux9b6viul19yep7zxjqt9iuqwo68u2trb95bdl6hn6eyuc9djmi40f0yfion6ndp2xlpivoqe5zcb3qp89gw4cbgh5vn98hptev1t76vlz8m75k14d22pfkmhpgw7b1z == 
\q\u\7\h\1\l\o\z\7\f\i\9\o\1\9\3\a\y\q\u\x\j\q\c\h\3\y\r\3\w\0\6\b\p\f\q\g\7\b\r\p\k\g\p\n\i\j\q\t\z\p\4\k\r\z\r\j\p\l\4\4\h\h\s\k\c\j\9\f\9\a\u\q\y\i\3\x\f\n\8\e\v\f\9\3\2\n\9\6\9\s\4\s\f\g\9\g\s\v\5\e\9\r\k\i\n\5\3\x\m\9\d\o\u\q\8\v\p\l\b\9\y\d\b\s\b\9\6\3\h\w\4\1\0\u\s\8\i\n\k\1\m\y\y\p\h\r\o\3\7\h\v\e\y\b\u\i\k\i\n\z\v\5\l\9\y\t\q\f\3\z\t\n\x\w\r\e\4\i\n\o\2\2\r\8\p\l\i\z\o\0\d\c\i\9\5\1\w\5\f\m\o\k\4\d\g\8\a\h\4\d\z\1\0\o\a\0\3\g\l\w\m\t\m\b\o\a\f\j\i\3\u\a\9\y\5\1\l\4\e\e\f\t\3\g\u\m\b\a\3\s\j\l\2\g\0\b\2\n\g\x\8\j\f\y\z\e\j\q\v\i\o\t\1\x\i\b\x\1\8\0\1\q\t\v\w\a\r\y\q\2\s\a\m\b\2\v\1\j\x\k\p\w\q\0\5\0\0\z\t\c\b\o\v\r\h\y\a\g\y\n\k\3\l\h\6\1\z\5\1\c\7\k\m\q\7\3\j\f\j\4\w\o\6\s\2\q\2\8\a\f\9\p\9\m\4\5\c\l\u\a\u\x\m\4\9\g\o\r\o\t\k\u\1\9\u\r\y\u\k\u\v\k\v\u\x\9\b\6\v\i\u\l\1\9\y\e\p\7\z\x\j\q\t\9\i\u\q\w\o\6\8\u\2\t\r\b\9\5\b\d\l\6\h\n\6\e\y\u\c\9\d\j\m\i\4\0\f\0\y\f\i\o\n\6\n\d\p\2\x\l\p\i\v\o\q\e\5\z\c\b\3\q\p\8\9\g\w\4\c\b\g\h\5\v\n\9\8\h\p\t\e\v\1\t\7\6\v\l\z\8\m\7\5\k\1\4\d\2\2\p\f\k\m\h\p\g\w\7\b\1\z ]] 00:07:07.360 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:07.360 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:07.360 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:07.360 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.360 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.360 16:21:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:07.360 [2024-07-15 16:21:52.818460] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
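Each "Copying: 512/512 [B]" block in this stretch is one iteration of the same flag matrix; a condensed sketch of the loop driving it (flag sets and paths as in dd/posix.sh; gen_bytes 512 from the harness is replaced here by /dev/urandom and the inline [[ ... == ... ]] payload check by cmp, so treat this as an illustration only):

    set -e

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D=/home/vagrant/spdk_repo/spdk/test/dd

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)    # direct nonblock sync dsync

    for flag_ro in "${flags_ro[@]}"; do
        # 512 bytes of random payload per read-flag (gen_bytes 512 in the trace).
        head -c 512 /dev/urandom > "$D/dd.dump0"
        for flag_rw in "${flags_rw[@]}"; do
            "$DD" --aio --if="$D/dd.dump0" --iflag="$flag_ro" \
                  --of="$D/dd.dump1" --oflag="$flag_rw"
            cmp "$D/dd.dump0" "$D/dd.dump1"   # the trace compares the generated text inline
        done
    done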
00:07:07.360 [2024-07-15 16:21:52.818906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63457 ] 00:07:07.619 [2024-07-15 16:21:52.956200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.619 [2024-07-15 16:21:53.067972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.619 [2024-07-15 16:21:53.121942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.886  Copying: 512/512 [B] (average 500 kBps) 00:07:07.886 00:07:07.886 16:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gh7setf9mwqixplkntbh9rzowswhhc810e5doxbmbw6mmfev8dp4s7xuiv9j9o45fy50yrgq5k72cett0nllxgdq3k8r512rwjx11iokb9hhgto1ls1oyzz0sdvaeicc5vt9d7jj3domwfindm467v0ysonn7yk64jqhhkln5dttfh1uccdit3a31y2gs598mnn29nxlt4pb6h9abi7b7h4v7594jmb3xvjtxlpe5w8tnufnwtxvrz75vemkzesqmzdloi3tldnqaneedtiy3eebhu7cphn6iihed6i3dvrs5wwpvnkla90k01ertaszzsbq0pujk31xrzbcplni57qjxzi2u325no168y21fdefz0r3lpiokw4dpgxvhvip29eld4gnk0taz14jqgmu3rhr47vptlfq0ovn4ir49956epv7fbms6lhny1s7pbydj524hjgsc13tfnn7vwff35n2qrj93w7gctmvt7uizvm6c073dl9rnji7pupf71n5 == \g\h\7\s\e\t\f\9\m\w\q\i\x\p\l\k\n\t\b\h\9\r\z\o\w\s\w\h\h\c\8\1\0\e\5\d\o\x\b\m\b\w\6\m\m\f\e\v\8\d\p\4\s\7\x\u\i\v\9\j\9\o\4\5\f\y\5\0\y\r\g\q\5\k\7\2\c\e\t\t\0\n\l\l\x\g\d\q\3\k\8\r\5\1\2\r\w\j\x\1\1\i\o\k\b\9\h\h\g\t\o\1\l\s\1\o\y\z\z\0\s\d\v\a\e\i\c\c\5\v\t\9\d\7\j\j\3\d\o\m\w\f\i\n\d\m\4\6\7\v\0\y\s\o\n\n\7\y\k\6\4\j\q\h\h\k\l\n\5\d\t\t\f\h\1\u\c\c\d\i\t\3\a\3\1\y\2\g\s\5\9\8\m\n\n\2\9\n\x\l\t\4\p\b\6\h\9\a\b\i\7\b\7\h\4\v\7\5\9\4\j\m\b\3\x\v\j\t\x\l\p\e\5\w\8\t\n\u\f\n\w\t\x\v\r\z\7\5\v\e\m\k\z\e\s\q\m\z\d\l\o\i\3\t\l\d\n\q\a\n\e\e\d\t\i\y\3\e\e\b\h\u\7\c\p\h\n\6\i\i\h\e\d\6\i\3\d\v\r\s\5\w\w\p\v\n\k\l\a\9\0\k\0\1\e\r\t\a\s\z\z\s\b\q\0\p\u\j\k\3\1\x\r\z\b\c\p\l\n\i\5\7\q\j\x\z\i\2\u\3\2\5\n\o\1\6\8\y\2\1\f\d\e\f\z\0\r\3\l\p\i\o\k\w\4\d\p\g\x\v\h\v\i\p\2\9\e\l\d\4\g\n\k\0\t\a\z\1\4\j\q\g\m\u\3\r\h\r\4\7\v\p\t\l\f\q\0\o\v\n\4\i\r\4\9\9\5\6\e\p\v\7\f\b\m\s\6\l\h\n\y\1\s\7\p\b\y\d\j\5\2\4\h\j\g\s\c\1\3\t\f\n\n\7\v\w\f\f\3\5\n\2\q\r\j\9\3\w\7\g\c\t\m\v\t\7\u\i\z\v\m\6\c\0\7\3\d\l\9\r\n\j\i\7\p\u\p\f\7\1\n\5 ]] 00:07:07.886 16:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.886 16:21:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:08.144 [2024-07-15 16:21:53.442530] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:08.144 [2024-07-15 16:21:53.442625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63465 ] 00:07:08.144 [2024-07-15 16:21:53.577229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.144 [2024-07-15 16:21:53.687249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.403 [2024-07-15 16:21:53.742035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.662  Copying: 512/512 [B] (average 500 kBps) 00:07:08.662 00:07:08.662 16:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gh7setf9mwqixplkntbh9rzowswhhc810e5doxbmbw6mmfev8dp4s7xuiv9j9o45fy50yrgq5k72cett0nllxgdq3k8r512rwjx11iokb9hhgto1ls1oyzz0sdvaeicc5vt9d7jj3domwfindm467v0ysonn7yk64jqhhkln5dttfh1uccdit3a31y2gs598mnn29nxlt4pb6h9abi7b7h4v7594jmb3xvjtxlpe5w8tnufnwtxvrz75vemkzesqmzdloi3tldnqaneedtiy3eebhu7cphn6iihed6i3dvrs5wwpvnkla90k01ertaszzsbq0pujk31xrzbcplni57qjxzi2u325no168y21fdefz0r3lpiokw4dpgxvhvip29eld4gnk0taz14jqgmu3rhr47vptlfq0ovn4ir49956epv7fbms6lhny1s7pbydj524hjgsc13tfnn7vwff35n2qrj93w7gctmvt7uizvm6c073dl9rnji7pupf71n5 == \g\h\7\s\e\t\f\9\m\w\q\i\x\p\l\k\n\t\b\h\9\r\z\o\w\s\w\h\h\c\8\1\0\e\5\d\o\x\b\m\b\w\6\m\m\f\e\v\8\d\p\4\s\7\x\u\i\v\9\j\9\o\4\5\f\y\5\0\y\r\g\q\5\k\7\2\c\e\t\t\0\n\l\l\x\g\d\q\3\k\8\r\5\1\2\r\w\j\x\1\1\i\o\k\b\9\h\h\g\t\o\1\l\s\1\o\y\z\z\0\s\d\v\a\e\i\c\c\5\v\t\9\d\7\j\j\3\d\o\m\w\f\i\n\d\m\4\6\7\v\0\y\s\o\n\n\7\y\k\6\4\j\q\h\h\k\l\n\5\d\t\t\f\h\1\u\c\c\d\i\t\3\a\3\1\y\2\g\s\5\9\8\m\n\n\2\9\n\x\l\t\4\p\b\6\h\9\a\b\i\7\b\7\h\4\v\7\5\9\4\j\m\b\3\x\v\j\t\x\l\p\e\5\w\8\t\n\u\f\n\w\t\x\v\r\z\7\5\v\e\m\k\z\e\s\q\m\z\d\l\o\i\3\t\l\d\n\q\a\n\e\e\d\t\i\y\3\e\e\b\h\u\7\c\p\h\n\6\i\i\h\e\d\6\i\3\d\v\r\s\5\w\w\p\v\n\k\l\a\9\0\k\0\1\e\r\t\a\s\z\z\s\b\q\0\p\u\j\k\3\1\x\r\z\b\c\p\l\n\i\5\7\q\j\x\z\i\2\u\3\2\5\n\o\1\6\8\y\2\1\f\d\e\f\z\0\r\3\l\p\i\o\k\w\4\d\p\g\x\v\h\v\i\p\2\9\e\l\d\4\g\n\k\0\t\a\z\1\4\j\q\g\m\u\3\r\h\r\4\7\v\p\t\l\f\q\0\o\v\n\4\i\r\4\9\9\5\6\e\p\v\7\f\b\m\s\6\l\h\n\y\1\s\7\p\b\y\d\j\5\2\4\h\j\g\s\c\1\3\t\f\n\n\7\v\w\f\f\3\5\n\2\q\r\j\9\3\w\7\g\c\t\m\v\t\7\u\i\z\v\m\6\c\0\7\3\d\l\9\r\n\j\i\7\p\u\p\f\7\1\n\5 ]] 00:07:08.662 16:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.662 16:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:08.662 [2024-07-15 16:21:54.082819] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:08.662 [2024-07-15 16:21:54.082929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63472 ] 00:07:08.921 [2024-07-15 16:21:54.219724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.921 [2024-07-15 16:21:54.323656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.921 [2024-07-15 16:21:54.378654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.181  Copying: 512/512 [B] (average 500 kBps) 00:07:09.181 00:07:09.181 16:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gh7setf9mwqixplkntbh9rzowswhhc810e5doxbmbw6mmfev8dp4s7xuiv9j9o45fy50yrgq5k72cett0nllxgdq3k8r512rwjx11iokb9hhgto1ls1oyzz0sdvaeicc5vt9d7jj3domwfindm467v0ysonn7yk64jqhhkln5dttfh1uccdit3a31y2gs598mnn29nxlt4pb6h9abi7b7h4v7594jmb3xvjtxlpe5w8tnufnwtxvrz75vemkzesqmzdloi3tldnqaneedtiy3eebhu7cphn6iihed6i3dvrs5wwpvnkla90k01ertaszzsbq0pujk31xrzbcplni57qjxzi2u325no168y21fdefz0r3lpiokw4dpgxvhvip29eld4gnk0taz14jqgmu3rhr47vptlfq0ovn4ir49956epv7fbms6lhny1s7pbydj524hjgsc13tfnn7vwff35n2qrj93w7gctmvt7uizvm6c073dl9rnji7pupf71n5 == \g\h\7\s\e\t\f\9\m\w\q\i\x\p\l\k\n\t\b\h\9\r\z\o\w\s\w\h\h\c\8\1\0\e\5\d\o\x\b\m\b\w\6\m\m\f\e\v\8\d\p\4\s\7\x\u\i\v\9\j\9\o\4\5\f\y\5\0\y\r\g\q\5\k\7\2\c\e\t\t\0\n\l\l\x\g\d\q\3\k\8\r\5\1\2\r\w\j\x\1\1\i\o\k\b\9\h\h\g\t\o\1\l\s\1\o\y\z\z\0\s\d\v\a\e\i\c\c\5\v\t\9\d\7\j\j\3\d\o\m\w\f\i\n\d\m\4\6\7\v\0\y\s\o\n\n\7\y\k\6\4\j\q\h\h\k\l\n\5\d\t\t\f\h\1\u\c\c\d\i\t\3\a\3\1\y\2\g\s\5\9\8\m\n\n\2\9\n\x\l\t\4\p\b\6\h\9\a\b\i\7\b\7\h\4\v\7\5\9\4\j\m\b\3\x\v\j\t\x\l\p\e\5\w\8\t\n\u\f\n\w\t\x\v\r\z\7\5\v\e\m\k\z\e\s\q\m\z\d\l\o\i\3\t\l\d\n\q\a\n\e\e\d\t\i\y\3\e\e\b\h\u\7\c\p\h\n\6\i\i\h\e\d\6\i\3\d\v\r\s\5\w\w\p\v\n\k\l\a\9\0\k\0\1\e\r\t\a\s\z\z\s\b\q\0\p\u\j\k\3\1\x\r\z\b\c\p\l\n\i\5\7\q\j\x\z\i\2\u\3\2\5\n\o\1\6\8\y\2\1\f\d\e\f\z\0\r\3\l\p\i\o\k\w\4\d\p\g\x\v\h\v\i\p\2\9\e\l\d\4\g\n\k\0\t\a\z\1\4\j\q\g\m\u\3\r\h\r\4\7\v\p\t\l\f\q\0\o\v\n\4\i\r\4\9\9\5\6\e\p\v\7\f\b\m\s\6\l\h\n\y\1\s\7\p\b\y\d\j\5\2\4\h\j\g\s\c\1\3\t\f\n\n\7\v\w\f\f\3\5\n\2\q\r\j\9\3\w\7\g\c\t\m\v\t\7\u\i\z\v\m\6\c\0\7\3\d\l\9\r\n\j\i\7\p\u\p\f\7\1\n\5 ]] 00:07:09.181 16:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:09.181 16:21:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:09.181 [2024-07-15 16:21:54.709115] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:09.181 [2024-07-15 16:21:54.709205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63485 ] 00:07:09.440 [2024-07-15 16:21:54.847366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.440 [2024-07-15 16:21:54.941345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.700 [2024-07-15 16:21:54.997281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.959  Copying: 512/512 [B] (average 500 kBps) 00:07:09.959 00:07:09.959 16:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gh7setf9mwqixplkntbh9rzowswhhc810e5doxbmbw6mmfev8dp4s7xuiv9j9o45fy50yrgq5k72cett0nllxgdq3k8r512rwjx11iokb9hhgto1ls1oyzz0sdvaeicc5vt9d7jj3domwfindm467v0ysonn7yk64jqhhkln5dttfh1uccdit3a31y2gs598mnn29nxlt4pb6h9abi7b7h4v7594jmb3xvjtxlpe5w8tnufnwtxvrz75vemkzesqmzdloi3tldnqaneedtiy3eebhu7cphn6iihed6i3dvrs5wwpvnkla90k01ertaszzsbq0pujk31xrzbcplni57qjxzi2u325no168y21fdefz0r3lpiokw4dpgxvhvip29eld4gnk0taz14jqgmu3rhr47vptlfq0ovn4ir49956epv7fbms6lhny1s7pbydj524hjgsc13tfnn7vwff35n2qrj93w7gctmvt7uizvm6c073dl9rnji7pupf71n5 == \g\h\7\s\e\t\f\9\m\w\q\i\x\p\l\k\n\t\b\h\9\r\z\o\w\s\w\h\h\c\8\1\0\e\5\d\o\x\b\m\b\w\6\m\m\f\e\v\8\d\p\4\s\7\x\u\i\v\9\j\9\o\4\5\f\y\5\0\y\r\g\q\5\k\7\2\c\e\t\t\0\n\l\l\x\g\d\q\3\k\8\r\5\1\2\r\w\j\x\1\1\i\o\k\b\9\h\h\g\t\o\1\l\s\1\o\y\z\z\0\s\d\v\a\e\i\c\c\5\v\t\9\d\7\j\j\3\d\o\m\w\f\i\n\d\m\4\6\7\v\0\y\s\o\n\n\7\y\k\6\4\j\q\h\h\k\l\n\5\d\t\t\f\h\1\u\c\c\d\i\t\3\a\3\1\y\2\g\s\5\9\8\m\n\n\2\9\n\x\l\t\4\p\b\6\h\9\a\b\i\7\b\7\h\4\v\7\5\9\4\j\m\b\3\x\v\j\t\x\l\p\e\5\w\8\t\n\u\f\n\w\t\x\v\r\z\7\5\v\e\m\k\z\e\s\q\m\z\d\l\o\i\3\t\l\d\n\q\a\n\e\e\d\t\i\y\3\e\e\b\h\u\7\c\p\h\n\6\i\i\h\e\d\6\i\3\d\v\r\s\5\w\w\p\v\n\k\l\a\9\0\k\0\1\e\r\t\a\s\z\z\s\b\q\0\p\u\j\k\3\1\x\r\z\b\c\p\l\n\i\5\7\q\j\x\z\i\2\u\3\2\5\n\o\1\6\8\y\2\1\f\d\e\f\z\0\r\3\l\p\i\o\k\w\4\d\p\g\x\v\h\v\i\p\2\9\e\l\d\4\g\n\k\0\t\a\z\1\4\j\q\g\m\u\3\r\h\r\4\7\v\p\t\l\f\q\0\o\v\n\4\i\r\4\9\9\5\6\e\p\v\7\f\b\m\s\6\l\h\n\y\1\s\7\p\b\y\d\j\5\2\4\h\j\g\s\c\1\3\t\f\n\n\7\v\w\f\f\3\5\n\2\q\r\j\9\3\w\7\g\c\t\m\v\t\7\u\i\z\v\m\6\c\0\7\3\d\l\9\r\n\j\i\7\p\u\p\f\7\1\n\5 ]] 00:07:09.959 00:07:09.959 real 0m5.067s 00:07:09.959 user 0m2.888s 00:07:09.959 sys 0m1.193s 00:07:09.959 16:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.959 ************************************ 00:07:09.959 END TEST dd_flags_misc_forced_aio 00:07:09.959 16:21:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:09.959 ************************************ 00:07:09.959 16:21:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:09.959 16:21:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:09.959 16:21:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:09.960 16:21:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:09.960 ************************************ 00:07:09.960 END TEST spdk_dd_posix 00:07:09.960 ************************************ 00:07:09.960 00:07:09.960 real 0m22.362s 00:07:09.960 user 0m11.415s 
00:07:09.960 sys 0m6.898s 00:07:09.960 16:21:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.960 16:21:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:09.960 16:21:55 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:09.960 16:21:55 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:09.960 16:21:55 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.960 16:21:55 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.960 16:21:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:09.960 ************************************ 00:07:09.960 START TEST spdk_dd_malloc 00:07:09.960 ************************************ 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:09.960 * Looking for test storage... 00:07:09.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:09.960 ************************************ 00:07:09.960 START TEST dd_malloc_copy 00:07:09.960 ************************************ 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:09.960 16:21:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:10.219 [2024-07-15 16:21:55.522572] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:10.219 [2024-07-15 16:21:55.522666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63554 ] 00:07:10.219 { 00:07:10.219 "subsystems": [ 00:07:10.219 { 00:07:10.219 "subsystem": "bdev", 00:07:10.219 "config": [ 00:07:10.219 { 00:07:10.219 "params": { 00:07:10.219 "block_size": 512, 00:07:10.219 "num_blocks": 1048576, 00:07:10.219 "name": "malloc0" 00:07:10.219 }, 00:07:10.219 "method": "bdev_malloc_create" 00:07:10.219 }, 00:07:10.219 { 00:07:10.219 "params": { 00:07:10.219 "block_size": 512, 00:07:10.219 "num_blocks": 1048576, 00:07:10.219 "name": "malloc1" 00:07:10.219 }, 00:07:10.219 "method": "bdev_malloc_create" 00:07:10.219 }, 00:07:10.219 { 00:07:10.219 "method": "bdev_wait_for_examine" 00:07:10.219 } 00:07:10.219 ] 00:07:10.219 } 00:07:10.219 ] 00:07:10.219 } 00:07:10.219 [2024-07-15 16:21:55.662540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.478 [2024-07-15 16:21:55.771181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.478 [2024-07-15 16:21:55.825756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.988  Copying: 212/512 [MB] (212 MBps) Copying: 420/512 [MB] (208 MBps) Copying: 512/512 [MB] (average 209 MBps) 00:07:13.988 00:07:13.988 16:21:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:13.988 16:21:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:13.988 16:21:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:13.988 16:21:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.988 [2024-07-15 16:21:59.273476] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
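The malloc copy above is driven entirely by a JSON bdev config handed to spdk_dd over a file descriptor (--json /dev/fd/62 in the trace); a standalone sketch of the same pair of copies, with the config values copied from the gen_conf output in the log:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # Two malloc bdevs of 1048576 x 512-byte blocks (512 MiB each), as in the gen_conf output.
    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'

    # malloc0 -> malloc1 (dd/malloc.sh@28), then back again (dd/malloc.sh@33).
    "$DD" --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$conf")
    "$DD" --ib=malloc1 --ob=malloc0 --json <(printf '%s' "$conf")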
00:07:13.988 [2024-07-15 16:21:59.274790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63601 ] 00:07:13.988 { 00:07:13.988 "subsystems": [ 00:07:13.988 { 00:07:13.988 "subsystem": "bdev", 00:07:13.988 "config": [ 00:07:13.988 { 00:07:13.988 "params": { 00:07:13.988 "block_size": 512, 00:07:13.988 "num_blocks": 1048576, 00:07:13.988 "name": "malloc0" 00:07:13.988 }, 00:07:13.988 "method": "bdev_malloc_create" 00:07:13.988 }, 00:07:13.988 { 00:07:13.988 "params": { 00:07:13.988 "block_size": 512, 00:07:13.988 "num_blocks": 1048576, 00:07:13.988 "name": "malloc1" 00:07:13.988 }, 00:07:13.988 "method": "bdev_malloc_create" 00:07:13.988 }, 00:07:13.988 { 00:07:13.988 "method": "bdev_wait_for_examine" 00:07:13.988 } 00:07:13.988 ] 00:07:13.988 } 00:07:13.988 ] 00:07:13.989 } 00:07:13.989 [2024-07-15 16:21:59.414801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.989 [2024-07-15 16:21:59.528132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.248 [2024-07-15 16:21:59.583566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.791  Copying: 206/512 [MB] (206 MBps) Copying: 417/512 [MB] (211 MBps) Copying: 512/512 [MB] (average 209 MBps) 00:07:17.791 00:07:17.791 ************************************ 00:07:17.791 END TEST dd_malloc_copy 00:07:17.791 ************************************ 00:07:17.791 00:07:17.791 real 0m7.515s 00:07:17.791 user 0m6.523s 00:07:17.791 sys 0m0.840s 00:07:17.791 16:22:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.791 16:22:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.791 16:22:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:17.791 ************************************ 00:07:17.791 END TEST spdk_dd_malloc 00:07:17.791 ************************************ 00:07:17.791 00:07:17.791 real 0m7.655s 00:07:17.791 user 0m6.574s 00:07:17.791 sys 0m0.927s 00:07:17.791 16:22:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.791 16:22:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:17.791 16:22:03 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:17.791 16:22:03 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:17.791 16:22:03 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:17.791 16:22:03 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.791 16:22:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:17.791 ************************************ 00:07:17.791 START TEST spdk_dd_bdev_to_bdev 00:07:17.791 ************************************ 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:17.791 * Looking for test storage... 
00:07:17.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:17.791 
16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:17.791 ************************************ 00:07:17.791 START TEST dd_inflate_file 00:07:17.791 ************************************ 00:07:17.791 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:17.791 [2024-07-15 16:22:03.233923] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:17.791 [2024-07-15 16:22:03.234722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63711 ] 00:07:18.052 [2024-07-15 16:22:03.374655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.052 [2024-07-15 16:22:03.489663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.052 [2024-07-15 16:22:03.543271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.310  Copying: 64/64 [MB] (average 1422 MBps) 00:07:18.310 00:07:18.310 ************************************ 00:07:18.310 END TEST dd_inflate_file 00:07:18.310 ************************************ 00:07:18.310 00:07:18.310 real 0m0.670s 00:07:18.310 user 0m0.411s 00:07:18.310 sys 0m0.320s 00:07:18.310 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.310 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:18.569 ************************************ 00:07:18.569 START TEST dd_copy_to_out_bdev 00:07:18.569 ************************************ 00:07:18.569 16:22:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:18.569 { 00:07:18.569 "subsystems": [ 00:07:18.569 { 00:07:18.569 "subsystem": "bdev", 00:07:18.569 "config": [ 00:07:18.569 { 00:07:18.569 "params": { 00:07:18.569 "trtype": "pcie", 00:07:18.569 "traddr": "0000:00:10.0", 00:07:18.569 "name": "Nvme0" 00:07:18.569 }, 00:07:18.569 "method": "bdev_nvme_attach_controller" 00:07:18.569 }, 00:07:18.569 { 00:07:18.569 "params": { 00:07:18.569 "trtype": "pcie", 00:07:18.569 "traddr": "0000:00:11.0", 00:07:18.569 "name": "Nvme1" 00:07:18.569 }, 00:07:18.569 "method": "bdev_nvme_attach_controller" 00:07:18.569 }, 00:07:18.569 { 00:07:18.569 "method": "bdev_wait_for_examine" 00:07:18.569 } 00:07:18.569 ] 00:07:18.569 } 00:07:18.569 ] 00:07:18.569 } 00:07:18.569 [2024-07-15 16:22:03.959769] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
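(The test_file0_size of 67108891 bytes recorded just above is consistent with how dd.dump0 was built in this log: the echoed 26-character magic line plus its newline, followed by the 64 MiB of zeroes appended during dd_inflate_file:

    67 108 891 = 67 108 864 (64 MiB) + 26 ("This Is Our Magic, find it") + 1 (trailing newline) )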
00:07:18.569 [2024-07-15 16:22:03.959892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63745 ] 00:07:18.569 [2024-07-15 16:22:04.098091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.828 [2024-07-15 16:22:04.209112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.828 [2024-07-15 16:22:04.264962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.463  Copying: 59/64 [MB] (59 MBps) Copying: 64/64 [MB] (average 59 MBps) 00:07:20.463 00:07:20.463 00:07:20.463 real 0m1.861s 00:07:20.463 user 0m1.616s 00:07:20.463 sys 0m1.431s 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.463 ************************************ 00:07:20.463 END TEST dd_copy_to_out_bdev 00:07:20.463 ************************************ 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:20.463 ************************************ 00:07:20.463 START TEST dd_offset_magic 00:07:20.463 ************************************ 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:20.463 16:22:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:20.463 [2024-07-15 16:22:05.873767] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
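(The dd_offset_magic passes that follow repeat the same pattern for each entry in offsets=(16 64): copy 65 blocks of 1 MiB from Nvme0n1 into Nvme1n1 at that block offset, read one 1 MiB block back from the same offset into dd.dump1, and confirm the 26-byte magic survived the round trip. A condensed sketch of the first pass, with conf.json standing in for the config generated on /dev/fd/62, is roughly:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$DD" --ib=Nvme0n1 --ob=Nvme1n1 --bs=1048576 --count=65 --seek=16 --json conf.json
    "$DD" --ib=Nvme1n1 --of=dd.dump1 --bs=1048576 --count=1 --skip=16 --json conf.json
    read -rn26 magic_check < dd.dump1
    [[ $magic_check == 'This Is Our Magic, find it' ]] )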
00:07:20.463 [2024-07-15 16:22:05.873928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63790 ] 00:07:20.463 { 00:07:20.463 "subsystems": [ 00:07:20.463 { 00:07:20.463 "subsystem": "bdev", 00:07:20.463 "config": [ 00:07:20.463 { 00:07:20.463 "params": { 00:07:20.464 "trtype": "pcie", 00:07:20.464 "traddr": "0000:00:10.0", 00:07:20.464 "name": "Nvme0" 00:07:20.464 }, 00:07:20.464 "method": "bdev_nvme_attach_controller" 00:07:20.464 }, 00:07:20.464 { 00:07:20.464 "params": { 00:07:20.464 "trtype": "pcie", 00:07:20.464 "traddr": "0000:00:11.0", 00:07:20.464 "name": "Nvme1" 00:07:20.464 }, 00:07:20.464 "method": "bdev_nvme_attach_controller" 00:07:20.464 }, 00:07:20.464 { 00:07:20.464 "method": "bdev_wait_for_examine" 00:07:20.464 } 00:07:20.464 ] 00:07:20.464 } 00:07:20.464 ] 00:07:20.464 } 00:07:20.722 [2024-07-15 16:22:06.013680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.722 [2024-07-15 16:22:06.116167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.722 [2024-07-15 16:22:06.169701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.239  Copying: 65/65 [MB] (average 942 MBps) 00:07:21.239 00:07:21.239 16:22:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:21.239 16:22:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:21.239 16:22:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:21.239 16:22:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:21.239 [2024-07-15 16:22:06.732565] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:21.239 [2024-07-15 16:22:06.732661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63810 ] 00:07:21.239 { 00:07:21.239 "subsystems": [ 00:07:21.239 { 00:07:21.239 "subsystem": "bdev", 00:07:21.239 "config": [ 00:07:21.239 { 00:07:21.239 "params": { 00:07:21.239 "trtype": "pcie", 00:07:21.239 "traddr": "0000:00:10.0", 00:07:21.239 "name": "Nvme0" 00:07:21.239 }, 00:07:21.239 "method": "bdev_nvme_attach_controller" 00:07:21.239 }, 00:07:21.239 { 00:07:21.239 "params": { 00:07:21.239 "trtype": "pcie", 00:07:21.239 "traddr": "0000:00:11.0", 00:07:21.239 "name": "Nvme1" 00:07:21.239 }, 00:07:21.239 "method": "bdev_nvme_attach_controller" 00:07:21.239 }, 00:07:21.239 { 00:07:21.239 "method": "bdev_wait_for_examine" 00:07:21.239 } 00:07:21.239 ] 00:07:21.239 } 00:07:21.239 ] 00:07:21.239 } 00:07:21.496 [2024-07-15 16:22:06.866451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.496 [2024-07-15 16:22:06.962847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.496 [2024-07-15 16:22:07.017816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.014  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:22.014 00:07:22.014 16:22:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:22.014 16:22:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:22.014 16:22:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:22.014 16:22:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:22.014 16:22:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:22.014 16:22:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:22.014 16:22:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:22.014 [2024-07-15 16:22:07.453017] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:22.014 [2024-07-15 16:22:07.453107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63826 ] 00:07:22.014 { 00:07:22.014 "subsystems": [ 00:07:22.014 { 00:07:22.014 "subsystem": "bdev", 00:07:22.014 "config": [ 00:07:22.014 { 00:07:22.014 "params": { 00:07:22.014 "trtype": "pcie", 00:07:22.014 "traddr": "0000:00:10.0", 00:07:22.014 "name": "Nvme0" 00:07:22.014 }, 00:07:22.014 "method": "bdev_nvme_attach_controller" 00:07:22.014 }, 00:07:22.014 { 00:07:22.014 "params": { 00:07:22.014 "trtype": "pcie", 00:07:22.014 "traddr": "0000:00:11.0", 00:07:22.014 "name": "Nvme1" 00:07:22.014 }, 00:07:22.014 "method": "bdev_nvme_attach_controller" 00:07:22.014 }, 00:07:22.014 { 00:07:22.014 "method": "bdev_wait_for_examine" 00:07:22.014 } 00:07:22.014 ] 00:07:22.014 } 00:07:22.014 ] 00:07:22.014 } 00:07:22.273 [2024-07-15 16:22:07.585637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.273 [2024-07-15 16:22:07.696068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.273 [2024-07-15 16:22:07.751099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.790  Copying: 65/65 [MB] (average 1000 MBps) 00:07:22.790 00:07:22.790 16:22:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:22.790 16:22:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:22.790 16:22:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:22.790 16:22:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:22.790 [2024-07-15 16:22:08.287888] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:22.790 [2024-07-15 16:22:08.287988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63841 ] 00:07:22.790 { 00:07:22.790 "subsystems": [ 00:07:22.790 { 00:07:22.790 "subsystem": "bdev", 00:07:22.790 "config": [ 00:07:22.790 { 00:07:22.790 "params": { 00:07:22.790 "trtype": "pcie", 00:07:22.790 "traddr": "0000:00:10.0", 00:07:22.790 "name": "Nvme0" 00:07:22.790 }, 00:07:22.790 "method": "bdev_nvme_attach_controller" 00:07:22.790 }, 00:07:22.790 { 00:07:22.790 "params": { 00:07:22.790 "trtype": "pcie", 00:07:22.790 "traddr": "0000:00:11.0", 00:07:22.790 "name": "Nvme1" 00:07:22.790 }, 00:07:22.790 "method": "bdev_nvme_attach_controller" 00:07:22.790 }, 00:07:22.790 { 00:07:22.790 "method": "bdev_wait_for_examine" 00:07:22.790 } 00:07:22.790 ] 00:07:22.790 } 00:07:22.790 ] 00:07:22.790 } 00:07:23.048 [2024-07-15 16:22:08.420609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.048 [2024-07-15 16:22:08.516135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.048 [2024-07-15 16:22:08.572995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.574  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:23.574 00:07:23.574 16:22:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:23.574 16:22:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:23.574 00:07:23.574 real 0m3.143s 00:07:23.574 user 0m2.294s 00:07:23.574 sys 0m0.923s 00:07:23.574 ************************************ 00:07:23.574 END TEST dd_offset_magic 00:07:23.574 ************************************ 00:07:23.574 16:22:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.574 16:22:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:23.574 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:23.574 [2024-07-15 16:22:09.065015] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:23.574 [2024-07-15 16:22:09.065112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63878 ] 00:07:23.574 { 00:07:23.574 "subsystems": [ 00:07:23.574 { 00:07:23.574 "subsystem": "bdev", 00:07:23.574 "config": [ 00:07:23.574 { 00:07:23.574 "params": { 00:07:23.574 "trtype": "pcie", 00:07:23.574 "traddr": "0000:00:10.0", 00:07:23.574 "name": "Nvme0" 00:07:23.574 }, 00:07:23.574 "method": "bdev_nvme_attach_controller" 00:07:23.574 }, 00:07:23.574 { 00:07:23.574 "params": { 00:07:23.574 "trtype": "pcie", 00:07:23.574 "traddr": "0000:00:11.0", 00:07:23.574 "name": "Nvme1" 00:07:23.574 }, 00:07:23.574 "method": "bdev_nvme_attach_controller" 00:07:23.574 }, 00:07:23.574 { 00:07:23.574 "method": "bdev_wait_for_examine" 00:07:23.574 } 00:07:23.574 ] 00:07:23.574 } 00:07:23.574 ] 00:07:23.574 } 00:07:23.833 [2024-07-15 16:22:09.202772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.833 [2024-07-15 16:22:09.299593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.833 [2024-07-15 16:22:09.354675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.350  Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:24.350 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:24.350 16:22:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:24.350 [2024-07-15 16:22:09.798525] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:24.350 [2024-07-15 16:22:09.798816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63899 ] 00:07:24.350 { 00:07:24.350 "subsystems": [ 00:07:24.350 { 00:07:24.350 "subsystem": "bdev", 00:07:24.350 "config": [ 00:07:24.350 { 00:07:24.350 "params": { 00:07:24.350 "trtype": "pcie", 00:07:24.350 "traddr": "0000:00:10.0", 00:07:24.350 "name": "Nvme0" 00:07:24.350 }, 00:07:24.350 "method": "bdev_nvme_attach_controller" 00:07:24.350 }, 00:07:24.350 { 00:07:24.350 "params": { 00:07:24.350 "trtype": "pcie", 00:07:24.350 "traddr": "0000:00:11.0", 00:07:24.350 "name": "Nvme1" 00:07:24.350 }, 00:07:24.350 "method": "bdev_nvme_attach_controller" 00:07:24.350 }, 00:07:24.350 { 00:07:24.350 "method": "bdev_wait_for_examine" 00:07:24.350 } 00:07:24.350 ] 00:07:24.350 } 00:07:24.350 ] 00:07:24.350 } 00:07:24.642 [2024-07-15 16:22:09.937581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.642 [2024-07-15 16:22:10.048784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.642 [2024-07-15 16:22:10.103541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.163  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:25.163 00:07:25.163 16:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:25.163 ************************************ 00:07:25.163 END TEST spdk_dd_bdev_to_bdev 00:07:25.163 ************************************ 00:07:25.163 00:07:25.163 real 0m7.430s 00:07:25.163 user 0m5.461s 00:07:25.163 sys 0m3.389s 00:07:25.163 16:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.163 16:22:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:25.163 16:22:10 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:25.163 16:22:10 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:25.163 16:22:10 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:25.164 16:22:10 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.164 16:22:10 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.164 16:22:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:25.164 ************************************ 00:07:25.164 START TEST spdk_dd_uring 00:07:25.164 ************************************ 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:25.164 * Looking for test storage... 
00:07:25.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:25.164 ************************************ 00:07:25.164 START TEST dd_uring_copy 00:07:25.164 ************************************ 00:07:25.164 
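(The dd_uring_copy test that follows backs a uring bdev with a zram device, pushes a magic-prefixed ~512 MiB dump file through it in both directions, and diffs the result. A rough stand-alone equivalent of that flow, where the zram disksize sysfs path and the conf.json file name are assumptions and the bdev config mirrors the JSON printed further down, would be:

    # allocate and size a fresh zram device at 512M (disksize sysfs path assumed)
    id=$(cat /sys/class/zram-control/hot_add)
    echo 512M > "/sys/block/zram${id}/disksize"

    # conf.json: a 512 MiB malloc bdev plus a uring bdev on the zram device
    cat > conf.json <<EOF
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
      { "method": "bdev_uring_create", "params": { "name": "uring0", "filename": "/dev/zram${id}" } },
      { "method": "bdev_wait_for_examine" } ] } ] }
    EOF

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$DD" --if=magic.dump0 --ob=uring0 --json conf.json    # file -> uring bdev
    "$DD" --ib=uring0 --of=magic.dump1 --json conf.json    # uring bdev -> file
    diff -q magic.dump0 magic.dump1 )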
16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=6676ea19xzgq0zddy1sf0no72rwt24fsvg09wawgovd323j37fzb2r8rankf9fwmx9kq9wcos0u92xkumbphwtcdqqh36ffey9vh99ky4yqjwwues8es2mmpb6qetba1tz0xn7zu0pfcvj779svnvidgfw6o717et31wed76nbj3qo3seds7va7u87vm82okwpj7j5i4wu1kirwyct0r05535kdpxcmbwf5yn2oodl1ywwlld4k3upg4vi0mtsgoe75fdtmq8sbu0ev694rii3ir8tfhs8a1cs7ynyqy5l6vg4bev68qamont6q188znfbnfsvm1383umo9bgi98qxrf68ayu7tnltfcvddn9bor2v50rw5rd46hmtjgqdwtdssv92f1j3gzpfrwc5sjcfsb7ef6qxidgxt1pfy8oy948x1qc27iqhywz0ks4x1c3i3zez87rg8cdch2a354b7joopb8rslrtrs26z242349dpt0zktjjq3n6e34xkcezvuon02fe65ikk7g80s9gl1msmem769po6k7ez2vrkqjajbnmkedqcoi2o16xqyprim3pvnf1xxsg81erpzknlj4dkyenezma8vjsuwk3fffkj9fn07ts4nw1ahjllpr54d04m7hy83q55aaev4zcvl9mu85e5mpo5je7bxx4cxrld3hxlveyrvt8uqabvw60jk2000tpzfiuizegunm2djniro9xf78rktsjwvm6xjrca7yprt6294zer5dtllcxjvij79jc8xtvw1ouzdu2zf88bxd4e0lwjd8364myfremxchbvolbw3oedb472hvmzxs9chvfer3pffbxqsjryifmhe47kbtf3o86bpp5hrr8d0wszz1jf7kq8xcqn1kp914b4hkj4owtzlu1hna7y1h6mtjzj83izjyuq6jn2hpli30kdgf0fj7hu54q4cmm8lie277avi5mewyeflogj8u1bbs1bvdxerf22chtrd4u2mou168gsffenu5m81b 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 6676ea19xzgq0zddy1sf0no72rwt24fsvg09wawgovd323j37fzb2r8rankf9fwmx9kq9wcos0u92xkumbphwtcdqqh36ffey9vh99ky4yqjwwues8es2mmpb6qetba1tz0xn7zu0pfcvj779svnvidgfw6o717et31wed76nbj3qo3seds7va7u87vm82okwpj7j5i4wu1kirwyct0r05535kdpxcmbwf5yn2oodl1ywwlld4k3upg4vi0mtsgoe75fdtmq8sbu0ev694rii3ir8tfhs8a1cs7ynyqy5l6vg4bev68qamont6q188znfbnfsvm1383umo9bgi98qxrf68ayu7tnltfcvddn9bor2v50rw5rd46hmtjgqdwtdssv92f1j3gzpfrwc5sjcfsb7ef6qxidgxt1pfy8oy948x1qc27iqhywz0ks4x1c3i3zez87rg8cdch2a354b7joopb8rslrtrs26z242349dpt0zktjjq3n6e34xkcezvuon02fe65ikk7g80s9gl1msmem769po6k7ez2vrkqjajbnmkedqcoi2o16xqyprim3pvnf1xxsg81erpzknlj4dkyenezma8vjsuwk3fffkj9fn07ts4nw1ahjllpr54d04m7hy83q55aaev4zcvl9mu85e5mpo5je7bxx4cxrld3hxlveyrvt8uqabvw60jk2000tpzfiuizegunm2djniro9xf78rktsjwvm6xjrca7yprt6294zer5dtllcxjvij79jc8xtvw1ouzdu2zf88bxd4e0lwjd8364myfremxchbvolbw3oedb472hvmzxs9chvfer3pffbxqsjryifmhe47kbtf3o86bpp5hrr8d0wszz1jf7kq8xcqn1kp914b4hkj4owtzlu1hna7y1h6mtjzj83izjyuq6jn2hpli30kdgf0fj7hu54q4cmm8lie277avi5mewyeflogj8u1bbs1bvdxerf22chtrd4u2mou168gsffenu5m81b 00:07:25.164 16:22:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:25.423 [2024-07-15 16:22:10.743499] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:25.423 [2024-07-15 16:22:10.743765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63963 ] 00:07:25.423 [2024-07-15 16:22:10.876715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.683 [2024-07-15 16:22:10.986365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.683 [2024-07-15 16:22:11.039754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.817  Copying: 511/511 [MB] (average 1034 MBps) 00:07:26.817 00:07:26.817 16:22:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:26.817 16:22:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:26.817 16:22:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:26.817 16:22:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:26.817 [2024-07-15 16:22:12.202614] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:26.817 [2024-07-15 16:22:12.202709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63987 ] 00:07:26.817 { 00:07:26.817 "subsystems": [ 00:07:26.817 { 00:07:26.817 "subsystem": "bdev", 00:07:26.817 "config": [ 00:07:26.817 { 00:07:26.817 "params": { 00:07:26.817 "block_size": 512, 00:07:26.817 "num_blocks": 1048576, 00:07:26.817 "name": "malloc0" 00:07:26.817 }, 00:07:26.817 "method": "bdev_malloc_create" 00:07:26.817 }, 00:07:26.817 { 00:07:26.817 "params": { 00:07:26.817 "filename": "/dev/zram1", 00:07:26.817 "name": "uring0" 00:07:26.817 }, 00:07:26.817 "method": "bdev_uring_create" 00:07:26.817 }, 00:07:26.817 { 00:07:26.817 "method": "bdev_wait_for_examine" 00:07:26.817 } 00:07:26.817 ] 00:07:26.817 } 00:07:26.817 ] 00:07:26.817 } 00:07:26.817 [2024-07-15 16:22:12.344020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.075 [2024-07-15 16:22:12.464010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.075 [2024-07-15 16:22:12.520640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.941  Copying: 225/512 [MB] (225 MBps) Copying: 450/512 [MB] (224 MBps) Copying: 512/512 [MB] (average 225 MBps) 00:07:29.941 00:07:29.941 16:22:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:29.941 16:22:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:29.941 16:22:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:29.941 16:22:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.941 [2024-07-15 16:22:15.465477] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:29.941 [2024-07-15 16:22:15.466141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64031 ] 00:07:29.941 { 00:07:29.941 "subsystems": [ 00:07:29.941 { 00:07:29.941 "subsystem": "bdev", 00:07:29.941 "config": [ 00:07:29.941 { 00:07:29.941 "params": { 00:07:29.941 "block_size": 512, 00:07:29.941 "num_blocks": 1048576, 00:07:29.941 "name": "malloc0" 00:07:29.941 }, 00:07:29.941 "method": "bdev_malloc_create" 00:07:29.941 }, 00:07:29.941 { 00:07:29.941 "params": { 00:07:29.941 "filename": "/dev/zram1", 00:07:29.941 "name": "uring0" 00:07:29.941 }, 00:07:29.941 "method": "bdev_uring_create" 00:07:29.941 }, 00:07:29.941 { 00:07:29.941 "method": "bdev_wait_for_examine" 00:07:29.941 } 00:07:29.941 ] 00:07:29.941 } 00:07:29.941 ] 00:07:29.941 } 00:07:30.200 [2024-07-15 16:22:15.605789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.200 [2024-07-15 16:22:15.725224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.459 [2024-07-15 16:22:15.785780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.649  Copying: 183/512 [MB] (183 MBps) Copying: 360/512 [MB] (176 MBps) Copying: 512/512 [MB] (average 184 MBps) 00:07:33.649 00:07:33.649 16:22:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:33.649 16:22:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 6676ea19xzgq0zddy1sf0no72rwt24fsvg09wawgovd323j37fzb2r8rankf9fwmx9kq9wcos0u92xkumbphwtcdqqh36ffey9vh99ky4yqjwwues8es2mmpb6qetba1tz0xn7zu0pfcvj779svnvidgfw6o717et31wed76nbj3qo3seds7va7u87vm82okwpj7j5i4wu1kirwyct0r05535kdpxcmbwf5yn2oodl1ywwlld4k3upg4vi0mtsgoe75fdtmq8sbu0ev694rii3ir8tfhs8a1cs7ynyqy5l6vg4bev68qamont6q188znfbnfsvm1383umo9bgi98qxrf68ayu7tnltfcvddn9bor2v50rw5rd46hmtjgqdwtdssv92f1j3gzpfrwc5sjcfsb7ef6qxidgxt1pfy8oy948x1qc27iqhywz0ks4x1c3i3zez87rg8cdch2a354b7joopb8rslrtrs26z242349dpt0zktjjq3n6e34xkcezvuon02fe65ikk7g80s9gl1msmem769po6k7ez2vrkqjajbnmkedqcoi2o16xqyprim3pvnf1xxsg81erpzknlj4dkyenezma8vjsuwk3fffkj9fn07ts4nw1ahjllpr54d04m7hy83q55aaev4zcvl9mu85e5mpo5je7bxx4cxrld3hxlveyrvt8uqabvw60jk2000tpzfiuizegunm2djniro9xf78rktsjwvm6xjrca7yprt6294zer5dtllcxjvij79jc8xtvw1ouzdu2zf88bxd4e0lwjd8364myfremxchbvolbw3oedb472hvmzxs9chvfer3pffbxqsjryifmhe47kbtf3o86bpp5hrr8d0wszz1jf7kq8xcqn1kp914b4hkj4owtzlu1hna7y1h6mtjzj83izjyuq6jn2hpli30kdgf0fj7hu54q4cmm8lie277avi5mewyeflogj8u1bbs1bvdxerf22chtrd4u2mou168gsffenu5m81b == 
\6\6\7\6\e\a\1\9\x\z\g\q\0\z\d\d\y\1\s\f\0\n\o\7\2\r\w\t\2\4\f\s\v\g\0\9\w\a\w\g\o\v\d\3\2\3\j\3\7\f\z\b\2\r\8\r\a\n\k\f\9\f\w\m\x\9\k\q\9\w\c\o\s\0\u\9\2\x\k\u\m\b\p\h\w\t\c\d\q\q\h\3\6\f\f\e\y\9\v\h\9\9\k\y\4\y\q\j\w\w\u\e\s\8\e\s\2\m\m\p\b\6\q\e\t\b\a\1\t\z\0\x\n\7\z\u\0\p\f\c\v\j\7\7\9\s\v\n\v\i\d\g\f\w\6\o\7\1\7\e\t\3\1\w\e\d\7\6\n\b\j\3\q\o\3\s\e\d\s\7\v\a\7\u\8\7\v\m\8\2\o\k\w\p\j\7\j\5\i\4\w\u\1\k\i\r\w\y\c\t\0\r\0\5\5\3\5\k\d\p\x\c\m\b\w\f\5\y\n\2\o\o\d\l\1\y\w\w\l\l\d\4\k\3\u\p\g\4\v\i\0\m\t\s\g\o\e\7\5\f\d\t\m\q\8\s\b\u\0\e\v\6\9\4\r\i\i\3\i\r\8\t\f\h\s\8\a\1\c\s\7\y\n\y\q\y\5\l\6\v\g\4\b\e\v\6\8\q\a\m\o\n\t\6\q\1\8\8\z\n\f\b\n\f\s\v\m\1\3\8\3\u\m\o\9\b\g\i\9\8\q\x\r\f\6\8\a\y\u\7\t\n\l\t\f\c\v\d\d\n\9\b\o\r\2\v\5\0\r\w\5\r\d\4\6\h\m\t\j\g\q\d\w\t\d\s\s\v\9\2\f\1\j\3\g\z\p\f\r\w\c\5\s\j\c\f\s\b\7\e\f\6\q\x\i\d\g\x\t\1\p\f\y\8\o\y\9\4\8\x\1\q\c\2\7\i\q\h\y\w\z\0\k\s\4\x\1\c\3\i\3\z\e\z\8\7\r\g\8\c\d\c\h\2\a\3\5\4\b\7\j\o\o\p\b\8\r\s\l\r\t\r\s\2\6\z\2\4\2\3\4\9\d\p\t\0\z\k\t\j\j\q\3\n\6\e\3\4\x\k\c\e\z\v\u\o\n\0\2\f\e\6\5\i\k\k\7\g\8\0\s\9\g\l\1\m\s\m\e\m\7\6\9\p\o\6\k\7\e\z\2\v\r\k\q\j\a\j\b\n\m\k\e\d\q\c\o\i\2\o\1\6\x\q\y\p\r\i\m\3\p\v\n\f\1\x\x\s\g\8\1\e\r\p\z\k\n\l\j\4\d\k\y\e\n\e\z\m\a\8\v\j\s\u\w\k\3\f\f\f\k\j\9\f\n\0\7\t\s\4\n\w\1\a\h\j\l\l\p\r\5\4\d\0\4\m\7\h\y\8\3\q\5\5\a\a\e\v\4\z\c\v\l\9\m\u\8\5\e\5\m\p\o\5\j\e\7\b\x\x\4\c\x\r\l\d\3\h\x\l\v\e\y\r\v\t\8\u\q\a\b\v\w\6\0\j\k\2\0\0\0\t\p\z\f\i\u\i\z\e\g\u\n\m\2\d\j\n\i\r\o\9\x\f\7\8\r\k\t\s\j\w\v\m\6\x\j\r\c\a\7\y\p\r\t\6\2\9\4\z\e\r\5\d\t\l\l\c\x\j\v\i\j\7\9\j\c\8\x\t\v\w\1\o\u\z\d\u\2\z\f\8\8\b\x\d\4\e\0\l\w\j\d\8\3\6\4\m\y\f\r\e\m\x\c\h\b\v\o\l\b\w\3\o\e\d\b\4\7\2\h\v\m\z\x\s\9\c\h\v\f\e\r\3\p\f\f\b\x\q\s\j\r\y\i\f\m\h\e\4\7\k\b\t\f\3\o\8\6\b\p\p\5\h\r\r\8\d\0\w\s\z\z\1\j\f\7\k\q\8\x\c\q\n\1\k\p\9\1\4\b\4\h\k\j\4\o\w\t\z\l\u\1\h\n\a\7\y\1\h\6\m\t\j\z\j\8\3\i\z\j\y\u\q\6\j\n\2\h\p\l\i\3\0\k\d\g\f\0\f\j\7\h\u\5\4\q\4\c\m\m\8\l\i\e\2\7\7\a\v\i\5\m\e\w\y\e\f\l\o\g\j\8\u\1\b\b\s\1\b\v\d\x\e\r\f\2\2\c\h\t\r\d\4\u\2\m\o\u\1\6\8\g\s\f\f\e\n\u\5\m\8\1\b ]] 00:07:33.649 16:22:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:33.650 16:22:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 6676ea19xzgq0zddy1sf0no72rwt24fsvg09wawgovd323j37fzb2r8rankf9fwmx9kq9wcos0u92xkumbphwtcdqqh36ffey9vh99ky4yqjwwues8es2mmpb6qetba1tz0xn7zu0pfcvj779svnvidgfw6o717et31wed76nbj3qo3seds7va7u87vm82okwpj7j5i4wu1kirwyct0r05535kdpxcmbwf5yn2oodl1ywwlld4k3upg4vi0mtsgoe75fdtmq8sbu0ev694rii3ir8tfhs8a1cs7ynyqy5l6vg4bev68qamont6q188znfbnfsvm1383umo9bgi98qxrf68ayu7tnltfcvddn9bor2v50rw5rd46hmtjgqdwtdssv92f1j3gzpfrwc5sjcfsb7ef6qxidgxt1pfy8oy948x1qc27iqhywz0ks4x1c3i3zez87rg8cdch2a354b7joopb8rslrtrs26z242349dpt0zktjjq3n6e34xkcezvuon02fe65ikk7g80s9gl1msmem769po6k7ez2vrkqjajbnmkedqcoi2o16xqyprim3pvnf1xxsg81erpzknlj4dkyenezma8vjsuwk3fffkj9fn07ts4nw1ahjllpr54d04m7hy83q55aaev4zcvl9mu85e5mpo5je7bxx4cxrld3hxlveyrvt8uqabvw60jk2000tpzfiuizegunm2djniro9xf78rktsjwvm6xjrca7yprt6294zer5dtllcxjvij79jc8xtvw1ouzdu2zf88bxd4e0lwjd8364myfremxchbvolbw3oedb472hvmzxs9chvfer3pffbxqsjryifmhe47kbtf3o86bpp5hrr8d0wszz1jf7kq8xcqn1kp914b4hkj4owtzlu1hna7y1h6mtjzj83izjyuq6jn2hpli30kdgf0fj7hu54q4cmm8lie277avi5mewyeflogj8u1bbs1bvdxerf22chtrd4u2mou168gsffenu5m81b == 
\6\6\7\6\e\a\1\9\x\z\g\q\0\z\d\d\y\1\s\f\0\n\o\7\2\r\w\t\2\4\f\s\v\g\0\9\w\a\w\g\o\v\d\3\2\3\j\3\7\f\z\b\2\r\8\r\a\n\k\f\9\f\w\m\x\9\k\q\9\w\c\o\s\0\u\9\2\x\k\u\m\b\p\h\w\t\c\d\q\q\h\3\6\f\f\e\y\9\v\h\9\9\k\y\4\y\q\j\w\w\u\e\s\8\e\s\2\m\m\p\b\6\q\e\t\b\a\1\t\z\0\x\n\7\z\u\0\p\f\c\v\j\7\7\9\s\v\n\v\i\d\g\f\w\6\o\7\1\7\e\t\3\1\w\e\d\7\6\n\b\j\3\q\o\3\s\e\d\s\7\v\a\7\u\8\7\v\m\8\2\o\k\w\p\j\7\j\5\i\4\w\u\1\k\i\r\w\y\c\t\0\r\0\5\5\3\5\k\d\p\x\c\m\b\w\f\5\y\n\2\o\o\d\l\1\y\w\w\l\l\d\4\k\3\u\p\g\4\v\i\0\m\t\s\g\o\e\7\5\f\d\t\m\q\8\s\b\u\0\e\v\6\9\4\r\i\i\3\i\r\8\t\f\h\s\8\a\1\c\s\7\y\n\y\q\y\5\l\6\v\g\4\b\e\v\6\8\q\a\m\o\n\t\6\q\1\8\8\z\n\f\b\n\f\s\v\m\1\3\8\3\u\m\o\9\b\g\i\9\8\q\x\r\f\6\8\a\y\u\7\t\n\l\t\f\c\v\d\d\n\9\b\o\r\2\v\5\0\r\w\5\r\d\4\6\h\m\t\j\g\q\d\w\t\d\s\s\v\9\2\f\1\j\3\g\z\p\f\r\w\c\5\s\j\c\f\s\b\7\e\f\6\q\x\i\d\g\x\t\1\p\f\y\8\o\y\9\4\8\x\1\q\c\2\7\i\q\h\y\w\z\0\k\s\4\x\1\c\3\i\3\z\e\z\8\7\r\g\8\c\d\c\h\2\a\3\5\4\b\7\j\o\o\p\b\8\r\s\l\r\t\r\s\2\6\z\2\4\2\3\4\9\d\p\t\0\z\k\t\j\j\q\3\n\6\e\3\4\x\k\c\e\z\v\u\o\n\0\2\f\e\6\5\i\k\k\7\g\8\0\s\9\g\l\1\m\s\m\e\m\7\6\9\p\o\6\k\7\e\z\2\v\r\k\q\j\a\j\b\n\m\k\e\d\q\c\o\i\2\o\1\6\x\q\y\p\r\i\m\3\p\v\n\f\1\x\x\s\g\8\1\e\r\p\z\k\n\l\j\4\d\k\y\e\n\e\z\m\a\8\v\j\s\u\w\k\3\f\f\f\k\j\9\f\n\0\7\t\s\4\n\w\1\a\h\j\l\l\p\r\5\4\d\0\4\m\7\h\y\8\3\q\5\5\a\a\e\v\4\z\c\v\l\9\m\u\8\5\e\5\m\p\o\5\j\e\7\b\x\x\4\c\x\r\l\d\3\h\x\l\v\e\y\r\v\t\8\u\q\a\b\v\w\6\0\j\k\2\0\0\0\t\p\z\f\i\u\i\z\e\g\u\n\m\2\d\j\n\i\r\o\9\x\f\7\8\r\k\t\s\j\w\v\m\6\x\j\r\c\a\7\y\p\r\t\6\2\9\4\z\e\r\5\d\t\l\l\c\x\j\v\i\j\7\9\j\c\8\x\t\v\w\1\o\u\z\d\u\2\z\f\8\8\b\x\d\4\e\0\l\w\j\d\8\3\6\4\m\y\f\r\e\m\x\c\h\b\v\o\l\b\w\3\o\e\d\b\4\7\2\h\v\m\z\x\s\9\c\h\v\f\e\r\3\p\f\f\b\x\q\s\j\r\y\i\f\m\h\e\4\7\k\b\t\f\3\o\8\6\b\p\p\5\h\r\r\8\d\0\w\s\z\z\1\j\f\7\k\q\8\x\c\q\n\1\k\p\9\1\4\b\4\h\k\j\4\o\w\t\z\l\u\1\h\n\a\7\y\1\h\6\m\t\j\z\j\8\3\i\z\j\y\u\q\6\j\n\2\h\p\l\i\3\0\k\d\g\f\0\f\j\7\h\u\5\4\q\4\c\m\m\8\l\i\e\2\7\7\a\v\i\5\m\e\w\y\e\f\l\o\g\j\8\u\1\b\b\s\1\b\v\d\x\e\r\f\2\2\c\h\t\r\d\4\u\2\m\o\u\1\6\8\g\s\f\f\e\n\u\5\m\8\1\b ]] 00:07:33.650 16:22:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:34.216 16:22:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:34.216 16:22:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:34.216 16:22:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:34.216 16:22:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:34.216 { 00:07:34.216 "subsystems": [ 00:07:34.216 { 00:07:34.216 "subsystem": "bdev", 00:07:34.216 "config": [ 00:07:34.216 { 00:07:34.216 "params": { 00:07:34.216 "block_size": 512, 00:07:34.216 "num_blocks": 1048576, 00:07:34.216 "name": "malloc0" 00:07:34.216 }, 00:07:34.216 "method": "bdev_malloc_create" 00:07:34.216 }, 00:07:34.216 { 00:07:34.216 "params": { 00:07:34.216 "filename": "/dev/zram1", 00:07:34.216 "name": "uring0" 00:07:34.216 }, 00:07:34.216 "method": "bdev_uring_create" 00:07:34.216 }, 00:07:34.216 { 00:07:34.216 "method": "bdev_wait_for_examine" 00:07:34.216 } 00:07:34.216 ] 00:07:34.216 } 00:07:34.216 ] 00:07:34.216 } 00:07:34.216 [2024-07-15 16:22:19.662639] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:34.216 [2024-07-15 16:22:19.662779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64104 ] 00:07:34.475 [2024-07-15 16:22:19.808055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.475 [2024-07-15 16:22:19.918547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.475 [2024-07-15 16:22:19.973445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.548  Copying: 147/512 [MB] (147 MBps) Copying: 296/512 [MB] (149 MBps) Copying: 448/512 [MB] (151 MBps) Copying: 512/512 [MB] (average 148 MBps) 00:07:38.548 00:07:38.548 16:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:38.548 16:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:38.548 16:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:38.548 16:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:38.548 16:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:38.548 16:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:38.548 16:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:38.548 16:22:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:38.548 [2024-07-15 16:22:24.093777] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
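(The runs below delete uring0 via bdev_uring_delete in the generated config and then confirm that a follow-up copy from uring0 fails with "No such device"; the harness wraps that follow-up spdk_dd call in NOT so the step passes only on a non-zero exit. Reduced to its core, with conf-delete.json and the /dev/null output as stand-ins for the fd-based plumbing used here, the check is essentially:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # conf-delete.json ends with { "method": "bdev_uring_delete", "params": { "name": "uring0" } }
    if "$DD" --ib=uring0 --of=/dev/null --json conf-delete.json; then
        echo 'expected spdk_dd to fail once uring0 was deleted' >&2
        exit 1
    fi )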
00:07:38.548 [2024-07-15 16:22:24.093884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64166 ] 00:07:38.805 { 00:07:38.805 "subsystems": [ 00:07:38.805 { 00:07:38.805 "subsystem": "bdev", 00:07:38.805 "config": [ 00:07:38.805 { 00:07:38.805 "params": { 00:07:38.805 "block_size": 512, 00:07:38.805 "num_blocks": 1048576, 00:07:38.805 "name": "malloc0" 00:07:38.805 }, 00:07:38.805 "method": "bdev_malloc_create" 00:07:38.805 }, 00:07:38.805 { 00:07:38.805 "params": { 00:07:38.805 "filename": "/dev/zram1", 00:07:38.805 "name": "uring0" 00:07:38.805 }, 00:07:38.805 "method": "bdev_uring_create" 00:07:38.805 }, 00:07:38.805 { 00:07:38.805 "params": { 00:07:38.805 "name": "uring0" 00:07:38.805 }, 00:07:38.805 "method": "bdev_uring_delete" 00:07:38.805 }, 00:07:38.805 { 00:07:38.805 "method": "bdev_wait_for_examine" 00:07:38.805 } 00:07:38.805 ] 00:07:38.805 } 00:07:38.805 ] 00:07:38.805 } 00:07:38.805 [2024-07-15 16:22:24.229400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.805 [2024-07-15 16:22:24.327797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.062 [2024-07-15 16:22:24.382094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.627  Copying: 0/0 [B] (average 0 Bps) 00:07:39.627 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.627 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:39.627 [2024-07-15 16:22:25.063463] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:39.627 [2024-07-15 16:22:25.064120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64196 ] 00:07:39.627 { 00:07:39.627 "subsystems": [ 00:07:39.627 { 00:07:39.627 "subsystem": "bdev", 00:07:39.627 "config": [ 00:07:39.627 { 00:07:39.627 "params": { 00:07:39.627 "block_size": 512, 00:07:39.627 "num_blocks": 1048576, 00:07:39.627 "name": "malloc0" 00:07:39.627 }, 00:07:39.627 "method": "bdev_malloc_create" 00:07:39.627 }, 00:07:39.627 { 00:07:39.627 "params": { 00:07:39.627 "filename": "/dev/zram1", 00:07:39.627 "name": "uring0" 00:07:39.627 }, 00:07:39.627 "method": "bdev_uring_create" 00:07:39.627 }, 00:07:39.627 { 00:07:39.627 "params": { 00:07:39.627 "name": "uring0" 00:07:39.627 }, 00:07:39.627 "method": "bdev_uring_delete" 00:07:39.627 }, 00:07:39.627 { 00:07:39.627 "method": "bdev_wait_for_examine" 00:07:39.627 } 00:07:39.627 ] 00:07:39.627 } 00:07:39.627 ] 00:07:39.627 } 00:07:39.885 [2024-07-15 16:22:25.197295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.885 [2024-07-15 16:22:25.308855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.885 [2024-07-15 16:22:25.364020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.142 [2024-07-15 16:22:25.569973] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:40.142 [2024-07-15 16:22:25.570025] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:40.142 [2024-07-15 16:22:25.570036] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:40.142 [2024-07-15 16:22:25.570046] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.401 [2024-07-15 16:22:25.892780] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:40.659 16:22:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:40.659 16:22:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:40.918 00:07:40.918 real 0m15.560s 00:07:40.918 user 0m10.591s 00:07:40.918 sys 0m12.458s 00:07:40.918 16:22:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.918 16:22:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:40.918 ************************************ 00:07:40.918 END TEST dd_uring_copy 00:07:40.918 ************************************ 00:07:40.918 16:22:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:40.918 ************************************ 00:07:40.918 END TEST spdk_dd_uring 00:07:40.918 ************************************ 00:07:40.918 00:07:40.918 real 0m15.710s 00:07:40.918 user 0m10.649s 00:07:40.918 sys 0m12.547s 00:07:40.918 16:22:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.918 16:22:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:40.918 16:22:26 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:40.918 16:22:26 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:40.918 16:22:26 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:40.918 16:22:26 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.918 16:22:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:40.918 ************************************ 00:07:40.918 START TEST spdk_dd_sparse 00:07:40.918 ************************************ 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:40.918 * Looking for test storage... 00:07:40.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:40.918 1+0 records in 00:07:40.918 1+0 records out 00:07:40.918 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00694455 s, 604 MB/s 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:40.918 1+0 records in 00:07:40.918 1+0 records out 00:07:40.918 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00622472 s, 674 MB/s 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:40.918 1+0 records in 00:07:40.918 1+0 records out 00:07:40.918 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00632877 s, 663 MB/s 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:40.918 ************************************ 00:07:40.918 START TEST dd_sparse_file_to_file 00:07:40.918 ************************************ 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:40.918 16:22:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:41.178 [2024-07-15 16:22:26.512240] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:41.178 [2024-07-15 16:22:26.512828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64281 ] 00:07:41.178 { 00:07:41.178 "subsystems": [ 00:07:41.178 { 00:07:41.178 "subsystem": "bdev", 00:07:41.178 "config": [ 00:07:41.178 { 00:07:41.178 "params": { 00:07:41.178 "block_size": 4096, 00:07:41.178 "filename": "dd_sparse_aio_disk", 00:07:41.178 "name": "dd_aio" 00:07:41.178 }, 00:07:41.178 "method": "bdev_aio_create" 00:07:41.178 }, 00:07:41.178 { 00:07:41.178 "params": { 00:07:41.178 "lvs_name": "dd_lvstore", 00:07:41.178 "bdev_name": "dd_aio" 00:07:41.178 }, 00:07:41.178 "method": "bdev_lvol_create_lvstore" 00:07:41.178 }, 00:07:41.178 { 00:07:41.178 "method": "bdev_wait_for_examine" 00:07:41.178 } 00:07:41.178 ] 00:07:41.178 } 00:07:41.178 ] 00:07:41.178 } 00:07:41.178 [2024-07-15 16:22:26.650487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.436 [2024-07-15 16:22:26.780477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.436 [2024-07-15 16:22:26.840186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.722  Copying: 12/36 [MB] (average 1333 MBps) 00:07:41.722 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:41.722 16:22:27 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:41.722 00:07:41.722 real 0m0.757s 00:07:41.722 user 0m0.488s 00:07:41.722 sys 0m0.369s 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:41.722 ************************************ 00:07:41.722 END TEST dd_sparse_file_to_file 00:07:41.722 ************************************ 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:41.722 ************************************ 00:07:41.722 START TEST dd_sparse_file_to_bdev 00:07:41.722 ************************************ 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:41.722 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:41.723 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:41.723 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:41.723 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:41.997 [2024-07-15 16:22:27.316800] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
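Editor's note: the sparse cases in this suite all follow the same shape seen in the trace: prepare a 36 MiB file with three 4 MiB data extents (dd ... seek=4 / seek=8), copy it with spdk_dd --sparse, then compare the apparent size (stat %s, 37748736 above) against the allocated 512-byte blocks (stat %b, 24576 above). A minimal standalone sketch of that check follows; it uses plain cp --sparse instead of spdk_dd, and /dev/urandom instead of /dev/zero so cp's zero-detection does not flatten the data extents. File names echo the test's own but everything here is illustrative, not the test code itself.

```bash
#!/usr/bin/env bash
set -euo pipefail

in=file_zero1 out=file_zero2   # names echo the test's; any scratch files work

# Three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB; the gaps stay as
# holes, so the file is 36 MiB apparent but only 12 MiB allocated.
dd if=/dev/urandom of="$in" bs=4M count=1
dd if=/dev/urandom of="$in" bs=4M count=1 seek=4
dd if=/dev/urandom of="$in" bs=4M count=1 seek=8

# Stand-in for the spdk_dd --sparse copy: preserve holes in the output.
cp --sparse=always "$in" "$out"

# Same checks as the test: apparent size (%s) must match exactly; allocated
# 512-byte blocks (%b) should match too when hole skipping worked.
stat --printf='%n: %s bytes, %b blocks\n' "$in" "$out"
[[ "$(stat --printf=%s "$in")" == "$(stat --printf=%s "$out")" ]]
if [[ "$(stat --printf=%b "$in")" == "$(stat --printf=%b "$out")" ]]; then
    echo "hole layout preserved"
fi
```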
00:07:41.997 { 00:07:41.997 "subsystems": [ 00:07:41.997 { 00:07:41.997 "subsystem": "bdev", 00:07:41.997 "config": [ 00:07:41.997 { 00:07:41.997 "params": { 00:07:41.997 "block_size": 4096, 00:07:41.997 "filename": "dd_sparse_aio_disk", 00:07:41.997 "name": "dd_aio" 00:07:41.997 }, 00:07:41.997 "method": "bdev_aio_create" 00:07:41.997 }, 00:07:41.997 { 00:07:41.997 "params": { 00:07:41.997 "lvs_name": "dd_lvstore", 00:07:41.997 "lvol_name": "dd_lvol", 00:07:41.997 "size_in_mib": 36, 00:07:41.997 "thin_provision": true 00:07:41.997 }, 00:07:41.997 "method": "bdev_lvol_create" 00:07:41.997 }, 00:07:41.997 { 00:07:41.997 "method": "bdev_wait_for_examine" 00:07:41.997 } 00:07:41.997 ] 00:07:41.997 } 00:07:41.997 ] 00:07:41.997 } 00:07:41.997 [2024-07-15 16:22:27.317379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64329 ] 00:07:41.997 [2024-07-15 16:22:27.457915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.256 [2024-07-15 16:22:27.563136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.256 [2024-07-15 16:22:27.621587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.514  Copying: 12/36 [MB] (average 480 MBps) 00:07:42.514 00:07:42.514 00:07:42.514 real 0m0.695s 00:07:42.514 user 0m0.445s 00:07:42.514 sys 0m0.353s 00:07:42.514 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.514 ************************************ 00:07:42.514 END TEST dd_sparse_file_to_bdev 00:07:42.514 ************************************ 00:07:42.514 16:22:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:42.514 16:22:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:42.514 16:22:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:42.514 16:22:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.514 16:22:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.514 16:22:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:42.514 ************************************ 00:07:42.514 START TEST dd_sparse_bdev_to_file 00:07:42.514 ************************************ 00:07:42.514 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:42.514 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:42.514 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:42.514 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:42.514 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:42.515 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:42.515 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
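Editor's note: each spdk_dd invocation above receives its bdev configuration as JSON over a process-substitution descriptor (--json /dev/fd/62). A small sketch of that pattern for the bdev_to_file case follows, with the path, flags and JSON taken from this run's trace; the heredoc stands in for the test's gen_conf helper, and the lvolstore/lvol from the earlier step is assumed to be rediscovered when the AIO bdev is examined.

```bash
#!/usr/bin/env bash
set -euo pipefail

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as used in this run

# JSON equivalent to what gen_conf emits for bdev_to_file: recreate the AIO bdev
# on the backing file and wait for examine so dd_lvstore/dd_lvol reappear.
conf() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "block_size": 4096,
            "filename": "dd_sparse_aio_disk",
            "name": "dd_aio"
          },
          "method": "bdev_aio_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
}

# Read from the logical volume, write a sparse regular file, 12 MiB I/O units.
"$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse \
           --json <(conf)
```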
00:07:42.515 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:42.515 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:42.515 [2024-07-15 16:22:28.059266] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:42.515 [2024-07-15 16:22:28.059938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64361 ] 00:07:42.774 { 00:07:42.774 "subsystems": [ 00:07:42.774 { 00:07:42.774 "subsystem": "bdev", 00:07:42.774 "config": [ 00:07:42.774 { 00:07:42.774 "params": { 00:07:42.774 "block_size": 4096, 00:07:42.774 "filename": "dd_sparse_aio_disk", 00:07:42.774 "name": "dd_aio" 00:07:42.774 }, 00:07:42.774 "method": "bdev_aio_create" 00:07:42.774 }, 00:07:42.774 { 00:07:42.774 "method": "bdev_wait_for_examine" 00:07:42.774 } 00:07:42.774 ] 00:07:42.774 } 00:07:42.774 ] 00:07:42.774 } 00:07:42.774 [2024-07-15 16:22:28.199919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.774 [2024-07-15 16:22:28.310227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.033 [2024-07-15 16:22:28.369604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.292  Copying: 12/36 [MB] (average 923 MBps) 00:07:43.292 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:43.292 00:07:43.292 real 0m0.715s 00:07:43.292 user 0m0.466s 00:07:43.292 sys 0m0.350s 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.292 ************************************ 00:07:43.292 END TEST dd_sparse_bdev_to_file 00:07:43.292 ************************************ 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:43.292 00:07:43.292 real 0m2.461s 00:07:43.292 user 0m1.497s 00:07:43.292 sys 0m1.264s 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.292 16:22:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:43.292 ************************************ 00:07:43.292 END TEST spdk_dd_sparse 00:07:43.292 ************************************ 00:07:43.292 16:22:28 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:43.292 16:22:28 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:43.292 16:22:28 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.292 16:22:28 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.292 16:22:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:43.292 ************************************ 00:07:43.292 START TEST spdk_dd_negative 00:07:43.292 ************************************ 00:07:43.292 16:22:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:43.551 * Looking for test storage... 00:07:43.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.551 16:22:28 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.551 16:22:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.551 16:22:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.551 16:22:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.552 ************************************ 00:07:43.552 START TEST dd_invalid_arguments 00:07:43.552 ************************************ 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.552 16:22:28 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.552 16:22:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:43.552 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:43.552 00:07:43.552 CPU options: 00:07:43.552 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:43.552 (like [0,1,10]) 00:07:43.552 --lcores lcore to CPU mapping list. The list is in the format: 00:07:43.552 [<,lcores[@CPUs]>...] 00:07:43.552 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:43.552 Within the group, '-' is used for range separator, 00:07:43.552 ',' is used for single number separator. 00:07:43.552 '( )' can be omitted for single element group, 00:07:43.552 '@' can be omitted if cpus and lcores have the same value 00:07:43.552 --disable-cpumask-locks Disable CPU core lock files. 00:07:43.552 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:43.552 pollers in the app support interrupt mode) 00:07:43.552 -p, --main-core main (primary) core for DPDK 00:07:43.552 00:07:43.552 Configuration options: 00:07:43.552 -c, --config, --json JSON config file 00:07:43.552 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:43.552 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:43.552 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:43.552 --rpcs-allowed comma-separated list of permitted RPCS 00:07:43.552 --json-ignore-init-errors don't exit on invalid config entry 00:07:43.552 00:07:43.552 Memory options: 00:07:43.552 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:43.552 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:43.552 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:43.552 -R, --huge-unlink unlink huge files after initialization 00:07:43.552 -n, --mem-channels number of memory channels used for DPDK 00:07:43.552 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:43.552 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:43.552 --no-huge run without using hugepages 00:07:43.552 -i, --shm-id shared memory ID (optional) 00:07:43.552 -g, --single-file-segments force creating just one hugetlbfs file 00:07:43.552 00:07:43.552 PCI options: 00:07:43.552 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:43.552 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:43.552 -u, --no-pci disable PCI access 00:07:43.552 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:43.552 00:07:43.552 Log options: 00:07:43.552 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:43.552 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:43.552 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:43.552 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:43.552 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:43.552 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:43.552 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:43.553 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:43.553 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:43.553 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:43.553 virtio_vfio_user, vmd) 00:07:43.553 --silence-noticelog disable notice level logging to stderr 00:07:43.553 00:07:43.553 Trace options: 00:07:43.553 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:43.553 setting 0 to disable trace (default 32768) 00:07:43.553 Tracepoints vary in size and can use more than one trace entry. 00:07:43.553 -e, --tpoint-group [:] 00:07:43.553 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:43.553 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:43.553 [2024-07-15 16:22:29.001848] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:43.553 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:43.553 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:43.553 a tracepoint group. First tpoint inside a group can be enabled by 00:07:43.553 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:43.553 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:43.553 in /include/spdk_internal/trace_defs.h 00:07:43.553 00:07:43.553 Other options: 00:07:43.553 -h, --help show this usage 00:07:43.553 -v, --version print SPDK version 00:07:43.553 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:43.553 --env-context Opaque context for use of the env implementation 00:07:43.553 00:07:43.553 Application specific: 00:07:43.553 [--------- DD Options ---------] 00:07:43.553 --if Input file. Must specify either --if or --ib. 00:07:43.553 --ib Input bdev. Must specifier either --if or --ib 00:07:43.553 --of Output file. Must specify either --of or --ob. 00:07:43.553 --ob Output bdev. Must specify either --of or --ob. 00:07:43.553 --iflag Input file flags. 00:07:43.553 --oflag Output file flags. 00:07:43.553 --bs I/O unit size (default: 4096) 00:07:43.553 --qd Queue depth (default: 2) 00:07:43.553 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:43.553 --skip Skip this many I/O units at start of input. (default: 0) 00:07:43.553 --seek Skip this many I/O units at start of output. (default: 0) 00:07:43.553 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:43.553 --sparse Enable hole skipping in input target 00:07:43.553 Available iflag and oflag values: 00:07:43.553 append - append mode 00:07:43.553 direct - use direct I/O for data 00:07:43.553 directory - fail unless a directory 00:07:43.553 dsync - use synchronized I/O for data 00:07:43.553 noatime - do not update access time 00:07:43.553 noctty - do not assign controlling terminal from file 00:07:43.553 nofollow - do not follow symlinks 00:07:43.553 nonblock - use non-blocking I/O 00:07:43.553 sync - use synchronized I/O for data and metadata 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.553 00:07:43.553 real 0m0.074s 00:07:43.553 user 0m0.038s 00:07:43.553 sys 0m0.035s 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:43.553 ************************************ 00:07:43.553 END TEST dd_invalid_arguments 00:07:43.553 ************************************ 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.553 ************************************ 00:07:43.553 START TEST dd_double_input 00:07:43.553 ************************************ 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.553 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:43.812 [2024-07-15 16:22:29.131791] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
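Editor's note: the negative cases here (dd_invalid_arguments, dd_double_input, and the ones that follow) only need to prove that spdk_dd exits non-zero when given contradictory options and prints the corresponding complaint. A stripped-down version of that assertion, without the NOT/es bookkeeping from autotest_common.sh, might look like the following; paths are the ones used in this run.

```bash
#!/usr/bin/env bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

# Supplying both an input file and an input bdev must be rejected
# ("You may specify either --if or --ib, but not both." in the trace above).
if "$SPDK_DD" --if="$DUMP0" --ib= --ob=; then
    echo "FAIL: spdk_dd accepted --if together with --ib" >&2
    exit 1
fi
echo "rejected as expected"
```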
00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.812 00:07:43.812 real 0m0.077s 00:07:43.812 user 0m0.048s 00:07:43.812 sys 0m0.028s 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:43.812 ************************************ 00:07:43.812 END TEST dd_double_input 00:07:43.812 ************************************ 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.812 ************************************ 00:07:43.812 START TEST dd_double_output 00:07:43.812 ************************************ 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.812 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:43.813 [2024-07-15 16:22:29.247902] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.813 00:07:43.813 real 0m0.059s 00:07:43.813 user 0m0.039s 00:07:43.813 sys 0m0.019s 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:43.813 ************************************ 00:07:43.813 END TEST dd_double_output 00:07:43.813 ************************************ 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.813 ************************************ 00:07:43.813 START TEST dd_no_input 00:07:43.813 ************************************ 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.813 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.813 16:22:29 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:44.071 [2024-07-15 16:22:29.376745] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.071 00:07:44.071 real 0m0.077s 00:07:44.071 user 0m0.055s 00:07:44.071 sys 0m0.020s 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:44.071 ************************************ 00:07:44.071 END TEST dd_no_input 00:07:44.071 ************************************ 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.071 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:44.071 ************************************ 00:07:44.071 START TEST dd_no_output 00:07:44.072 ************************************ 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.072 16:22:29 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.072 [2024-07-15 16:22:29.510498] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.072 00:07:44.072 real 0m0.077s 00:07:44.072 user 0m0.047s 00:07:44.072 sys 0m0.028s 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:44.072 ************************************ 00:07:44.072 END TEST dd_no_output 00:07:44.072 ************************************ 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:44.072 ************************************ 00:07:44.072 START TEST dd_wrong_blocksize 00:07:44.072 ************************************ 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.072 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:44.331 [2024-07-15 16:22:29.645582] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.331 00:07:44.331 real 0m0.080s 00:07:44.331 user 0m0.044s 00:07:44.331 sys 0m0.034s 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:44.331 ************************************ 00:07:44.331 END TEST dd_wrong_blocksize 00:07:44.331 ************************************ 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:44.331 ************************************ 00:07:44.331 START TEST dd_smaller_blocksize 00:07:44.331 ************************************ 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.331 16:22:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:44.331 [2024-07-15 16:22:29.783849] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:44.331 [2024-07-15 16:22:29.783977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64584 ] 00:07:44.590 [2024-07-15 16:22:29.927379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.590 [2024-07-15 16:22:30.050726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.590 [2024-07-15 16:22:30.109383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.157 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:45.157 [2024-07-15 16:22:30.425235] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:45.157 [2024-07-15 16:22:30.425342] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.157 [2024-07-15 16:22:30.543984] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.157 00:07:45.157 real 0m0.923s 00:07:45.157 user 0m0.436s 00:07:45.157 sys 0m0.379s 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:45.157 ************************************ 00:07:45.157 END TEST dd_smaller_blocksize 00:07:45.157 ************************************ 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.157 ************************************ 00:07:45.157 START TEST dd_invalid_count 00:07:45.157 ************************************ 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:45.157 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.158 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.158 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.158 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.158 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.158 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:45.416 [2024-07-15 16:22:30.757797] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.416 00:07:45.416 real 0m0.078s 00:07:45.416 user 0m0.048s 00:07:45.416 sys 0m0.028s 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:45.416 ************************************ 00:07:45.416 END TEST dd_invalid_count 
00:07:45.416 ************************************ 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.416 ************************************ 00:07:45.416 START TEST dd_invalid_oflag 00:07:45.416 ************************************ 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.416 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:45.417 [2024-07-15 16:22:30.892660] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.417 00:07:45.417 real 0m0.082s 00:07:45.417 user 0m0.047s 00:07:45.417 sys 0m0.033s 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:45.417 
************************************ 00:07:45.417 END TEST dd_invalid_oflag 00:07:45.417 ************************************ 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.417 ************************************ 00:07:45.417 START TEST dd_invalid_iflag 00:07:45.417 ************************************ 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:45.417 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.676 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.676 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.676 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.676 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.676 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.676 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.676 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.676 16:22:30 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:45.676 [2024-07-15 16:22:31.021225] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.676 00:07:45.676 real 0m0.075s 00:07:45.676 user 0m0.049s 00:07:45.676 sys 0m0.025s 00:07:45.676 ************************************ 00:07:45.676 END TEST dd_invalid_iflag 00:07:45.676 ************************************ 00:07:45.676 16:22:31 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.676 ************************************ 00:07:45.676 START TEST dd_unknown_flag 00:07:45.676 ************************************ 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.676 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.677 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.677 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.677 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.677 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.677 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.677 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.677 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:45.677 [2024-07-15 16:22:31.151584] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
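Every negative spdk_dd case traced so far (wrong block size, block size too large to allocate, negative --count, --oflag without --of, --iflag without --if) reduces to the same assertion: the tool must reject the bad argument, print an *ERROR* line, and exit non-zero (exit status 22 in this trace). A minimal standalone sketch of that pattern, using the dump-file paths shown above and a plain `!` in place of the harness's NOT/valid_exec_arg wrappers (sketch only, not the harness code):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  OF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  ! "$SPDK_DD" --if="$IF" --of="$OF" --bs=0      || exit 1   # "Invalid --bs value"
  ! "$SPDK_DD" --if="$IF" --of="$OF" --count=-9  || exit 1   # "Invalid --count value"
  ! "$SPDK_DD" --ib= --ob= --oflag=0             || exit 1   # "--oflags may be used only with --of"
  ! "$SPDK_DD" --ib= --ob= --iflag=0             || exit 1   # "--iflags may be used only with --if"

The unknown-flag (--oflag=-1) and invalid-JSON cases traced below are checked the same way; only the expected error message differs.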
00:07:45.677 [2024-07-15 16:22:31.151684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64677 ] 00:07:45.936 [2024-07-15 16:22:31.290322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.936 [2024-07-15 16:22:31.403132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.936 [2024-07-15 16:22:31.459167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.195 [2024-07-15 16:22:31.492551] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:46.195 [2024-07-15 16:22:31.492608] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.195 [2024-07-15 16:22:31.492665] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:46.195 [2024-07-15 16:22:31.492679] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.195 [2024-07-15 16:22:31.492923] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:46.195 [2024-07-15 16:22:31.492940] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.195 [2024-07-15 16:22:31.492988] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:46.195 [2024-07-15 16:22:31.492999] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:46.195 [2024-07-15 16:22:31.609490] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:46.195 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:46.195 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.195 ************************************ 00:07:46.195 END TEST dd_unknown_flag 00:07:46.195 ************************************ 00:07:46.195 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:46.195 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:46.195 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:46.195 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.195 00:07:46.195 real 0m0.613s 00:07:46.195 user 0m0.353s 00:07:46.195 sys 0m0.170s 00:07:46.195 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.195 16:22:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.455 ************************************ 00:07:46.455 START TEST dd_invalid_json 00:07:46.455 ************************************ 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:46.455 16:22:31 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.455 16:22:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:46.455 [2024-07-15 16:22:31.816754] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:46.455 [2024-07-15 16:22:31.816880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64706 ] 00:07:46.455 [2024-07-15 16:22:31.955218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.713 [2024-07-15 16:22:32.067446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.713 [2024-07-15 16:22:32.067519] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:46.713 [2024-07-15 16:22:32.067536] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:46.713 [2024-07-15 16:22:32.067546] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.713 [2024-07-15 16:22:32.067583] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.713 00:07:46.713 real 0m0.419s 00:07:46.713 user 0m0.238s 00:07:46.713 sys 0m0.079s 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:46.713 ************************************ 00:07:46.713 END TEST dd_invalid_json 00:07:46.713 ************************************ 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:46.713 00:07:46.713 real 0m3.374s 00:07:46.713 user 0m1.676s 00:07:46.713 sys 0m1.330s 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.713 16:22:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.713 ************************************ 00:07:46.713 END TEST spdk_dd_negative 00:07:46.713 ************************************ 00:07:46.714 16:22:32 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:46.714 00:07:46.714 real 1m18.814s 00:07:46.714 user 0m51.443s 00:07:46.714 sys 0m33.647s 00:07:46.714 16:22:32 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.714 ************************************ 00:07:46.714 END TEST spdk_dd 00:07:46.714 ************************************ 00:07:46.714 16:22:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:46.972 16:22:32 -- common/autotest_common.sh@1142 -- # return 0 00:07:46.972 16:22:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:46.972 16:22:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:46.972 16:22:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:46.972 16:22:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.972 16:22:32 -- common/autotest_common.sh@10 -- # set +x 00:07:46.972 16:22:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:07:46.972 16:22:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:46.972 16:22:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:46.972 16:22:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:46.972 16:22:32 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:46.972 16:22:32 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:46.972 16:22:32 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.972 16:22:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:46.972 16:22:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.972 16:22:32 -- common/autotest_common.sh@10 -- # set +x 00:07:46.972 ************************************ 00:07:46.972 START TEST nvmf_tcp 00:07:46.972 ************************************ 00:07:46.972 16:22:32 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.972 * Looking for test storage... 00:07:46.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.972 16:22:32 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.972 16:22:32 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.972 16:22:32 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.972 16:22:32 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.972 16:22:32 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.972 16:22:32 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.972 16:22:32 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:46.972 16:22:32 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:46.972 16:22:32 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:46.972 16:22:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:46.972 16:22:32 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:46.973 16:22:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:46.973 16:22:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.973 16:22:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.973 ************************************ 00:07:46.973 START TEST nvmf_host_management 00:07:46.973 ************************************ 00:07:46.973 
16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.232 * Looking for test storage... 00:07:47.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.232 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:47.233 Cannot find device "nvmf_init_br" 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:47.233 Cannot find device "nvmf_tgt_br" 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.233 Cannot find device "nvmf_tgt_br2" 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:47.233 Cannot find device "nvmf_init_br" 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:47.233 Cannot find device "nvmf_tgt_br" 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:47.233 16:22:32 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:47.233 Cannot find device "nvmf_tgt_br2" 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:47.233 Cannot find device "nvmf_br" 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:47.233 Cannot find device "nvmf_init_if" 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:47.233 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:47.491 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:47.491 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:47.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:07:47.492 00:07:47.492 --- 10.0.0.2 ping statistics --- 00:07:47.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.492 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:47.492 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.492 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:07:47.492 00:07:47.492 --- 10.0.0.3 ping statistics --- 00:07:47.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.492 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:07:47.492 00:07:47.492 --- 10.0.0.1 ping statistics --- 00:07:47.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.492 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=64967 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64967 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64967 ']' 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.492 16:22:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.492 [2024-07-15 16:22:33.027576] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:47.492 [2024-07-15 16:22:33.027690] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.751 [2024-07-15 16:22:33.172129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.009 [2024-07-15 16:22:33.304657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.009 [2024-07-15 16:22:33.304740] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.009 [2024-07-15 16:22:33.304754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.009 [2024-07-15 16:22:33.304771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.009 [2024-07-15 16:22:33.304780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
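For orientation, the namespace/veth/bridge plumbing that nvmf_veth_init ran above (and that the nvmf_tgt just launched now sits behind) condenses to the sketch below. Every command appears verbatim in the trace; the address assignments and link-up steps are folded into comments:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # host side, 10.0.0.1/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.2/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, 10.0.0.3/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip link add nvmf_br type bridge                              # ties the three *_br peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

nvmf_tgt runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.2 port 4420 (the port the iptables rule opens), while the initiator side stays in the root namespace on 10.0.0.1; the three ping checks above verified exactly that reachability.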
00:07:48.009 [2024-07-15 16:22:33.304964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.009 [2024-07-15 16:22:33.305054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.009 [2024-07-15 16:22:33.305204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:48.009 [2024-07-15 16:22:33.305213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.009 [2024-07-15 16:22:33.365450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.582 16:22:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.582 16:22:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:48.582 16:22:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.582 16:22:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.582 16:22:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 [2024-07-15 16:22:34.018247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 Malloc0 00:07:48.582 [2024-07-15 16:22:34.103934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.582 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65022 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65022 /var/tmp/bdevperf.sock 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65022 ']' 
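The bdevperf launch traced next reads its NVMe-oF attach configuration through bash process substitution, which is why its command line shows --json /dev/fd/63: gen_nvmf_target_json 0 emits the JSON printed a few entries below (one bdev_nvme_attach_controller block for Nvme0 at 10.0.0.2:4420) and bdevperf consumes it as if it were a config file. Roughly, as a sketch inferred from the @72/@73/@74 trace lines (the backgrounding with & and the $! capture are assumptions consistent with the perfpid=65022 expansion, not shown literally in the log):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!                                   # 65022 in this run
  waitforlisten "$perfpid" /var/tmp/bdevperf.sock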
00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:48.841 { 00:07:48.841 "params": { 00:07:48.841 "name": "Nvme$subsystem", 00:07:48.841 "trtype": "$TEST_TRANSPORT", 00:07:48.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.841 "adrfam": "ipv4", 00:07:48.841 "trsvcid": "$NVMF_PORT", 00:07:48.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.841 "hdgst": ${hdgst:-false}, 00:07:48.841 "ddgst": ${ddgst:-false} 00:07:48.841 }, 00:07:48.841 "method": "bdev_nvme_attach_controller" 00:07:48.841 } 00:07:48.841 EOF 00:07:48.841 )") 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:48.841 16:22:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:48.841 "params": { 00:07:48.841 "name": "Nvme0", 00:07:48.841 "trtype": "tcp", 00:07:48.841 "traddr": "10.0.0.2", 00:07:48.841 "adrfam": "ipv4", 00:07:48.841 "trsvcid": "4420", 00:07:48.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:48.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:48.841 "hdgst": false, 00:07:48.841 "ddgst": false 00:07:48.841 }, 00:07:48.841 "method": "bdev_nvme_attach_controller" 00:07:48.841 }' 00:07:48.841 [2024-07-15 16:22:34.202061] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:07:48.841 [2024-07-15 16:22:34.202142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65022 ] 00:07:48.841 [2024-07-15 16:22:34.343696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.100 [2024-07-15 16:22:34.464254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.100 [2024-07-15 16:22:34.529928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.100 Running I/O for 10 seconds... 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.668 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.668 [2024-07-15 16:22:35.184838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.184994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.185002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.185011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.668 [2024-07-15 16:22:35.185020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185068] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the 
state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506950 is same with the state(5) to be set 00:07:49.669 [2024-07-15 16:22:35.185525] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.185985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.185995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.669 [2024-07-15 16:22:35.186322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.669 [2024-07-15 16:22:35.186333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:49.670 [2024-07-15 16:22:35.186814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.670 [2024-07-15 16:22:35.186892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.670 [2024-07-15 16:22:35.186903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x949ec0 is same with the state(5) to be set 00:07:49.670 [2024-07-15 16:22:35.186970] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x949ec0 was disconnected and freed. reset controller. 00:07:49.670 [2024-07-15 16:22:35.188110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:49.670 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.670 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:49.670 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.670 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.670 task offset: 98304 on job bdev=Nvme0n1 fails 00:07:49.670 00:07:49.670 Latency(us) 00:07:49.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.670 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:49.670 Job: Nvme0n1 ended in about 0.54 seconds with error 00:07:49.670 Verification LBA range: start 0x0 length 0x400 00:07:49.670 Nvme0n1 : 0.54 1422.90 88.93 118.57 0.00 40325.07 4855.62 37891.72 00:07:49.670 =================================================================================================================== 00:07:49.670 Total : 1422.90 88.93 118.57 0.00 40325.07 4855.62 37891.72 00:07:49.670 [2024-07-15 16:22:35.190289] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.670 [2024-07-15 16:22:35.190314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x941d50 (9): Bad file descriptor 00:07:49.670 [2024-07-15 16:22:35.196229] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
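[Editor's note] The failure and recovery above are the point of this test: the host NQN is removed from the subsystem while bdevperf is driving verify I/O, the target tears down the queue pair (every in-flight READ completes as ABORTED - SQ DELETION), and re-adding the host lets the initiator-side bdev_nvme layer reconnect ("Resetting controller successful"). A minimal sketch of that round trip with plain rpc.py calls; the harness uses its rpc_cmd wrapper, so the standalone form below is an assumed equivalent.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Revoke the host while I/O is in flight; the target drops the connection and the
# outstanding reads are aborted, as logged above.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Re-authorize the same host NQN; bdev_nvme then resets and reconnects the controller.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0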
00:07:49.670 16:22:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.670 16:22:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65022 00:07:51.046 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65022) - No such process 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:51.046 { 00:07:51.046 "params": { 00:07:51.046 "name": "Nvme$subsystem", 00:07:51.046 "trtype": "$TEST_TRANSPORT", 00:07:51.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.046 "adrfam": "ipv4", 00:07:51.046 "trsvcid": "$NVMF_PORT", 00:07:51.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.046 "hdgst": ${hdgst:-false}, 00:07:51.046 "ddgst": ${ddgst:-false} 00:07:51.046 }, 00:07:51.046 "method": "bdev_nvme_attach_controller" 00:07:51.046 } 00:07:51.046 EOF 00:07:51.046 )") 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:51.046 16:22:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:51.046 "params": { 00:07:51.046 "name": "Nvme0", 00:07:51.046 "trtype": "tcp", 00:07:51.046 "traddr": "10.0.0.2", 00:07:51.046 "adrfam": "ipv4", 00:07:51.046 "trsvcid": "4420", 00:07:51.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:51.046 "hdgst": false, 00:07:51.046 "ddgst": false 00:07:51.046 }, 00:07:51.046 "method": "bdev_nvme_attach_controller" 00:07:51.046 }' 00:07:51.046 [2024-07-15 16:22:36.282530] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
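[Editor's note] Before the host was revoked, the first run gated on verify reads actually reaching Nvme0n1 by polling bdevperf's RPC socket (waitforio, host_management.sh@55 above, which saw 707 reads against a threshold of 100). A minimal sketch of that gate, assuming the same socket path and bdev name:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in {1..10}; do
    reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    [[ ${reads:-0} -ge 100 ]] && break   # enough I/O observed, safe to yank the host
    sleep 0.25
done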
00:07:51.046 [2024-07-15 16:22:36.282663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65060 ] 00:07:51.046 [2024-07-15 16:22:36.422522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.046 [2024-07-15 16:22:36.520880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.046 [2024-07-15 16:22:36.584853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.305 Running I/O for 1 seconds... 00:07:52.274 00:07:52.274 Latency(us) 00:07:52.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.274 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:52.274 Verification LBA range: start 0x0 length 0x400 00:07:52.274 Nvme0n1 : 1.03 1493.15 93.32 0.00 0.00 42014.23 5153.51 42419.67 00:07:52.274 =================================================================================================================== 00:07:52.274 Total : 1493.15 93.32 0.00 0.00 42014.23 5153.51 42419.67 00:07:52.533 16:22:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:52.533 16:22:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:52.533 16:22:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:52.533 16:22:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:52.533 16:22:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:52.533 16:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.533 16:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:52.533 rmmod nvme_tcp 00:07:52.533 rmmod nvme_fabrics 00:07:52.533 rmmod nvme_keyring 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64967 ']' 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64967 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 64967 ']' 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 64967 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.533 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64967 00:07:52.792 
16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:52.792 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:52.792 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64967' 00:07:52.792 killing process with pid 64967 00:07:52.792 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 64967 00:07:52.792 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 64967 00:07:52.792 [2024-07-15 16:22:38.327499] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:53.050 ************************************ 00:07:53.050 END TEST nvmf_host_management 00:07:53.050 ************************************ 00:07:53.050 00:07:53.050 real 0m5.932s 00:07:53.050 user 0m22.654s 00:07:53.050 sys 0m1.526s 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.050 16:22:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.050 16:22:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:53.050 16:22:38 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:53.050 16:22:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.050 16:22:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.050 16:22:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.050 ************************************ 00:07:53.050 START TEST nvmf_lvol 00:07:53.050 ************************************ 00:07:53.050 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:53.050 * Looking for test storage... 
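[Editor's note] The nvmf_host_management teardown traced above reduces to a short sequence; a sketch with the PID and interface names from this particular run (the _remove_spdk_ns helper body is not shown in this excerpt, so the netns deletion line is an assumption):

modprobe -v -r nvme-tcp            # also unloads nvme_fabrics and nvme_keyring, as logged
kill 64967 && wait 64967           # stop the nvmf target; wait works because it is a child of the test shell
ip netns delete nvmf_tgt_ns_spdk   # assumed content of _remove_spdk_ns
ip -4 addr flush nvmf_init_if      # clear the initiator-side address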
00:07:53.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.050 16:22:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.050 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:53.051 16:22:38 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:53.051 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:53.051 Cannot find device "nvmf_tgt_br" 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.310 Cannot find device "nvmf_tgt_br2" 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:53.310 Cannot find device "nvmf_tgt_br" 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:53.310 Cannot find device "nvmf_tgt_br2" 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:53.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:07:53.310 00:07:53.310 --- 10.0.0.2 ping statistics --- 00:07:53.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.310 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:53.310 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:53.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.569 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:53.569 00:07:53.569 --- 10.0.0.3 ping statistics --- 00:07:53.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.569 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:53.569 00:07:53.569 --- 10.0.0.1 ping statistics --- 00:07:53.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.569 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65274 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65274 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65274 ']' 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.569 16:22:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.569 [2024-07-15 16:22:38.939257] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:07:53.569 [2024-07-15 16:22:38.939362] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.569 [2024-07-15 16:22:39.076251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.827 [2024-07-15 16:22:39.195072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.827 [2024-07-15 16:22:39.195135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
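
The nvmf_veth_init trace above boils down to a small veth/bridge topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 and 10.0.0.3 live on veth ends inside the nvmf_tgt_ns_spdk namespace, both sides are joined through the nvmf_br bridge, and TCP port 4420 is opened in iptables. A condensed sketch of that setup follows; the device names and addresses are taken from the trace, it assumes root privileges, and it is an illustrative reduction of the helper rather than a drop-in replacement (the second target veth pair and the cleanup steps are omitted).

# Condensed sketch of the veth topology built by nvmf_veth_init (not the helper itself).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                        # bridge the two pairs together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator-to-target sanity check
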
00:07:53.827 [2024-07-15 16:22:39.195147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.827 [2024-07-15 16:22:39.195155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.827 [2024-07-15 16:22:39.195163] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.827 [2024-07-15 16:22:39.195267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.827 [2024-07-15 16:22:39.195394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.827 [2024-07-15 16:22:39.195396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.827 [2024-07-15 16:22:39.251919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.761 16:22:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.761 16:22:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:54.761 16:22:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.761 16:22:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.761 16:22:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.761 16:22:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.761 16:22:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:54.761 [2024-07-15 16:22:40.249392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.761 16:22:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:55.020 16:22:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:55.020 16:22:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:55.589 16:22:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:55.589 16:22:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:55.589 16:22:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:55.848 16:22:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=777bcad4-d863-4d3d-a9a3-d6cbb4d56df7 00:07:55.848 16:22:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 777bcad4-d863-4d3d-a9a3-d6cbb4d56df7 lvol 20 00:07:56.106 16:22:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=33c5c078-125d-4960-aa14-6536bbb5499c 00:07:56.106 16:22:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.365 16:22:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 33c5c078-125d-4960-aa14-6536bbb5499c 00:07:56.623 16:22:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.881 [2024-07-15 16:22:42.274416] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.881 16:22:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.139 16:22:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:57.139 16:22:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65350 00:07:57.139 16:22:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:58.072 16:22:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 33c5c078-125d-4960-aa14-6536bbb5499c MY_SNAPSHOT 00:07:58.330 16:22:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f31af739-7089-4eb8-9fd2-6b824b59a85b 00:07:58.330 16:22:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 33c5c078-125d-4960-aa14-6536bbb5499c 30 00:07:58.587 16:22:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f31af739-7089-4eb8-9fd2-6b824b59a85b MY_CLONE 00:07:58.845 16:22:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e9142dd8-2412-4352-8219-15d44dcbf1ca 00:07:58.845 16:22:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e9142dd8-2412-4352-8219-15d44dcbf1ca 00:07:59.410 16:22:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65350 00:08:07.525 Initializing NVMe Controllers 00:08:07.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:07.525 Controller IO queue size 128, less than required. 00:08:07.525 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:07.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:07.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:07.525 Initialization complete. Launching workers. 
00:08:07.525 ======================================================== 00:08:07.525 Latency(us) 00:08:07.525 Device Information : IOPS MiB/s Average min max 00:08:07.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10422.25 40.71 12285.41 1436.09 63637.68 00:08:07.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10591.65 41.37 12091.76 2559.73 60278.75 00:08:07.525 ======================================================== 00:08:07.525 Total : 21013.90 82.09 12187.80 1436.09 63637.68 00:08:07.525 00:08:07.525 16:22:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.784 16:22:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 33c5c078-125d-4960-aa14-6536bbb5499c 00:08:08.043 16:22:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 777bcad4-d863-4d3d-a9a3-d6cbb4d56df7 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.302 rmmod nvme_tcp 00:08:08.302 rmmod nvme_fabrics 00:08:08.302 rmmod nvme_keyring 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65274 ']' 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65274 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65274 ']' 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65274 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65274 00:08:08.302 killing process with pid 65274 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65274' 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65274 00:08:08.302 16:22:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65274 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
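
The teardown above closes out the nvmf_lvol run, so this is a convenient point to restate the RPC sequence the test exercised. The sketch below mirrors the scripts/rpc.py calls visible in the trace (the rpc.py path, bdev names, subsystem NQN, listener address and size arguments are all taken from the log); it is a rough reconstruction for orientation, not the test script itself, and it omits the perf run and the error handling around each call.

# Recap of the traced nvmf_lvol RPC flow (sketch; paths and names from the log).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512                        # two malloc bdevs (64 MiB, 512 B blocks)
$RPC bdev_malloc_create 64 512
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)        # lvstore on top of the raid0 bdev
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)       # logical volume (size argument as in the trace)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # snapshot taken while perf I/O is running
$RPC bdev_lvol_resize "$lvol" 30                      # grow the live volume under I/O
clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
$RPC bdev_lvol_inflate "$clone"                       # detach the clone from its snapshot
# Teardown, as in the trace: delete subsystem, lvol and lvstore.
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_lvol_delete "$lvol"
$RPC bdev_lvol_delete_lvstore -u "$lvs"
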
00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:08.561 ************************************ 00:08:08.561 END TEST nvmf_lvol 00:08:08.561 ************************************ 00:08:08.561 00:08:08.561 real 0m15.644s 00:08:08.561 user 1m4.816s 00:08:08.561 sys 0m4.566s 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.561 16:22:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.820 16:22:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:08.820 16:22:54 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.820 16:22:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.820 16:22:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.820 16:22:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.820 ************************************ 00:08:08.820 START TEST nvmf_lvs_grow 00:08:08.820 ************************************ 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.820 * Looking for test storage... 
00:08:08.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:08.820 Cannot find device "nvmf_tgt_br" 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.820 Cannot find device "nvmf_tgt_br2" 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:08.820 Cannot find device "nvmf_tgt_br" 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:08.820 Cannot find device "nvmf_tgt_br2" 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:08.820 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.079 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.079 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:09.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:09.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:09.080 00:08:09.080 --- 10.0.0.2 ping statistics --- 00:08:09.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.080 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:09.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:09.080 00:08:09.080 --- 10.0.0.3 ping statistics --- 00:08:09.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.080 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:09.080 00:08:09.080 --- 10.0.0.1 ping statistics --- 00:08:09.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.080 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65672 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65672 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65672 ']' 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
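
nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that start-and-wait step is sketched below; the binary path, namespace name and core mask come from the trace, the retry count is arbitrary, and the real helper does considerably more bookkeeping.

# Simplified start-and-wait for the target (sketch, assuming the paths shown in the trace).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the target responds.
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
    sleep 0.1
done
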
00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.080 16:22:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.338 [2024-07-15 16:22:54.640326] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:08:09.338 [2024-07-15 16:22:54.640412] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.338 [2024-07-15 16:22:54.782404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.597 [2024-07-15 16:22:54.891537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.597 [2024-07-15 16:22:54.891596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.597 [2024-07-15 16:22:54.891606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.597 [2024-07-15 16:22:54.891614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.597 [2024-07-15 16:22:54.891620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.597 [2024-07-15 16:22:54.891648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.597 [2024-07-15 16:22:54.948257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.164 16:22:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.164 16:22:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:10.164 16:22:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.164 16:22:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.164 16:22:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.164 16:22:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.164 16:22:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:10.423 [2024-07-15 16:22:55.894537] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.423 ************************************ 00:08:10.423 START TEST lvs_grow_clean 00:08:10.423 ************************************ 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:10.423 16:22:55 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:10.423 16:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.988 16:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:10.988 16:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:10.988 16:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:10.988 16:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:10.988 16:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:11.270 16:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:11.270 16:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:11.270 16:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f lvol 150 00:08:11.527 16:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5d0eba14-dd83-4a2a-bf2a-1c11dee779c2 00:08:11.527 16:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.527 16:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:11.847 [2024-07-15 16:22:57.241777] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:11.847 [2024-07-15 16:22:57.241860] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:11.847 true 00:08:11.847 16:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:11.847 16:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:12.105 16:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:12.106 16:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:12.364 16:22:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5d0eba14-dd83-4a2a-bf2a-1c11dee779c2 00:08:12.624 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.883 [2024-07-15 16:22:58.294337] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.883 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65755 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65755 /var/tmp/bdevperf.sock 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65755 ']' 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.141 16:22:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:13.141 [2024-07-15 16:22:58.587535] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
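
The lvs_grow_clean setup traced above is easiest to read in terms of cluster counts: a 200 MiB AIO file carved into 4 MiB clusters leaves 49 usable data clusters once lvstore metadata is accounted for, and after the backing file is grown to 400 MiB and rescanned (51200 -> 102400 blocks), the bdev_lvol_grow_lvstore call issued later in the run is expected to double that count to 99. A rough sketch of the flow follows; the rpc.py and aio file paths and all sizes are taken from the trace, and the grow call is shown here ahead of where it actually appears in the log.

# Core of the lvs_grow flow (sketch; paths and sizes from the trace).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$AIO"
$RPC bdev_aio_create "$AIO" aio_bdev 4096
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
$RPC bdev_lvol_create -u "$lvs" lvol 150            # volume exported over NVMe/TCP by the test
truncate -s 400M "$AIO"                             # grow the backing file...
$RPC bdev_aio_rescan aio_bdev                       # ...and let the aio bdev pick up the new size
$RPC bdev_lvol_grow_lvstore -u "$lvs"               # lvstore grows; data clusters go 49 -> 99
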
00:08:13.141 [2024-07-15 16:22:58.587605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65755 ] 00:08:13.399 [2024-07-15 16:22:58.723829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.399 [2024-07-15 16:22:58.834440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.399 [2024-07-15 16:22:58.893322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.336 16:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.336 16:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:14.336 16:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:14.336 Nvme0n1 00:08:14.336 16:22:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:14.594 [ 00:08:14.594 { 00:08:14.594 "name": "Nvme0n1", 00:08:14.594 "aliases": [ 00:08:14.594 "5d0eba14-dd83-4a2a-bf2a-1c11dee779c2" 00:08:14.594 ], 00:08:14.594 "product_name": "NVMe disk", 00:08:14.594 "block_size": 4096, 00:08:14.594 "num_blocks": 38912, 00:08:14.594 "uuid": "5d0eba14-dd83-4a2a-bf2a-1c11dee779c2", 00:08:14.594 "assigned_rate_limits": { 00:08:14.594 "rw_ios_per_sec": 0, 00:08:14.594 "rw_mbytes_per_sec": 0, 00:08:14.594 "r_mbytes_per_sec": 0, 00:08:14.594 "w_mbytes_per_sec": 0 00:08:14.594 }, 00:08:14.594 "claimed": false, 00:08:14.594 "zoned": false, 00:08:14.594 "supported_io_types": { 00:08:14.594 "read": true, 00:08:14.594 "write": true, 00:08:14.594 "unmap": true, 00:08:14.594 "flush": true, 00:08:14.594 "reset": true, 00:08:14.594 "nvme_admin": true, 00:08:14.594 "nvme_io": true, 00:08:14.594 "nvme_io_md": false, 00:08:14.594 "write_zeroes": true, 00:08:14.594 "zcopy": false, 00:08:14.594 "get_zone_info": false, 00:08:14.594 "zone_management": false, 00:08:14.594 "zone_append": false, 00:08:14.594 "compare": true, 00:08:14.594 "compare_and_write": true, 00:08:14.594 "abort": true, 00:08:14.594 "seek_hole": false, 00:08:14.594 "seek_data": false, 00:08:14.594 "copy": true, 00:08:14.594 "nvme_iov_md": false 00:08:14.594 }, 00:08:14.594 "memory_domains": [ 00:08:14.594 { 00:08:14.594 "dma_device_id": "system", 00:08:14.594 "dma_device_type": 1 00:08:14.594 } 00:08:14.594 ], 00:08:14.594 "driver_specific": { 00:08:14.594 "nvme": [ 00:08:14.594 { 00:08:14.594 "trid": { 00:08:14.594 "trtype": "TCP", 00:08:14.594 "adrfam": "IPv4", 00:08:14.594 "traddr": "10.0.0.2", 00:08:14.594 "trsvcid": "4420", 00:08:14.594 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:14.594 }, 00:08:14.594 "ctrlr_data": { 00:08:14.594 "cntlid": 1, 00:08:14.594 "vendor_id": "0x8086", 00:08:14.594 "model_number": "SPDK bdev Controller", 00:08:14.594 "serial_number": "SPDK0", 00:08:14.594 "firmware_revision": "24.09", 00:08:14.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:14.595 "oacs": { 00:08:14.595 "security": 0, 00:08:14.595 "format": 0, 00:08:14.595 "firmware": 0, 00:08:14.595 "ns_manage": 0 00:08:14.595 }, 00:08:14.595 "multi_ctrlr": true, 00:08:14.595 
"ana_reporting": false 00:08:14.595 }, 00:08:14.595 "vs": { 00:08:14.595 "nvme_version": "1.3" 00:08:14.595 }, 00:08:14.595 "ns_data": { 00:08:14.595 "id": 1, 00:08:14.595 "can_share": true 00:08:14.595 } 00:08:14.595 } 00:08:14.595 ], 00:08:14.595 "mp_policy": "active_passive" 00:08:14.595 } 00:08:14.595 } 00:08:14.595 ] 00:08:14.595 16:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65784 00:08:14.595 16:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:14.595 16:23:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:14.853 Running I/O for 10 seconds... 00:08:15.789 Latency(us) 00:08:15.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.789 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:15.789 =================================================================================================================== 00:08:15.789 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:15.789 00:08:16.723 16:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:16.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.723 Nvme0n1 : 2.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:16.723 =================================================================================================================== 00:08:16.723 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:16.723 00:08:16.982 true 00:08:16.982 16:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:16.982 16:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:17.240 16:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:17.240 16:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:17.240 16:23:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65784 00:08:17.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.807 Nvme0n1 : 3.00 7789.33 30.43 0.00 0.00 0.00 0.00 0.00 00:08:17.807 =================================================================================================================== 00:08:17.807 Total : 7789.33 30.43 0.00 0.00 0.00 0.00 0.00 00:08:17.807 00:08:18.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.742 Nvme0n1 : 4.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:18.742 =================================================================================================================== 00:08:18.742 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:18.742 00:08:20.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.117 Nvme0n1 : 5.00 7721.60 30.16 0.00 0.00 0.00 0.00 0.00 00:08:20.117 =================================================================================================================== 00:08:20.117 Total : 7721.60 30.16 0.00 0.00 0.00 
0.00 0.00 00:08:20.117 00:08:20.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.683 Nvme0n1 : 6.00 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:20.683 =================================================================================================================== 00:08:20.683 Total : 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:20.683 00:08:22.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.061 Nvme0n1 : 7.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:22.061 =================================================================================================================== 00:08:22.061 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:22.061 00:08:22.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.997 Nvme0n1 : 8.00 7588.25 29.64 0.00 0.00 0.00 0.00 0.00 00:08:22.997 =================================================================================================================== 00:08:22.997 Total : 7588.25 29.64 0.00 0.00 0.00 0.00 0.00 00:08:22.997 00:08:23.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.932 Nvme0n1 : 9.00 7563.56 29.55 0.00 0.00 0.00 0.00 0.00 00:08:23.932 =================================================================================================================== 00:08:23.932 Total : 7563.56 29.55 0.00 0.00 0.00 0.00 0.00 00:08:23.932 00:08:24.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.868 Nvme0n1 : 10.00 7518.40 29.37 0.00 0.00 0.00 0.00 0.00 00:08:24.868 =================================================================================================================== 00:08:24.868 Total : 7518.40 29.37 0.00 0.00 0.00 0.00 0.00 00:08:24.868 00:08:24.868 00:08:24.868 Latency(us) 00:08:24.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.868 Nvme0n1 : 10.01 7523.78 29.39 0.00 0.00 17007.47 13822.14 38606.66 00:08:24.868 =================================================================================================================== 00:08:24.868 Total : 7523.78 29.39 0.00 0.00 17007.47 13822.14 38606.66 00:08:24.868 0 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65755 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65755 ']' 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65755 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65755 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:24.868 killing process with pid 65755 00:08:24.868 Received shutdown signal, test time was about 10.000000 seconds 00:08:24.868 00:08:24.868 Latency(us) 00:08:24.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.868 =================================================================================================================== 00:08:24.868 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65755' 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65755 00:08:24.868 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65755 00:08:25.128 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.387 16:23:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:25.646 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:25.646 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:25.904 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:25.904 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:25.904 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.189 [2024-07-15 16:23:11.555483] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:26.189 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:26.499 request: 00:08:26.499 { 00:08:26.499 "uuid": "bce5acd4-43f8-43a0-91c8-5abc4d1fea9f", 00:08:26.499 "method": "bdev_lvol_get_lvstores", 00:08:26.499 "req_id": 1 00:08:26.499 } 00:08:26.499 Got JSON-RPC error response 00:08:26.499 response: 00:08:26.499 { 00:08:26.499 "code": -19, 00:08:26.499 "message": "No such device" 00:08:26.499 } 00:08:26.499 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:26.499 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:26.499 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:26.499 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:26.499 16:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.757 aio_bdev 00:08:26.757 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5d0eba14-dd83-4a2a-bf2a-1c11dee779c2 00:08:26.757 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=5d0eba14-dd83-4a2a-bf2a-1c11dee779c2 00:08:26.757 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:26.757 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:26.757 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:26.757 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:26.757 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:27.015 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5d0eba14-dd83-4a2a-bf2a-1c11dee779c2 -t 2000 00:08:27.015 [ 00:08:27.015 { 00:08:27.015 "name": "5d0eba14-dd83-4a2a-bf2a-1c11dee779c2", 00:08:27.015 "aliases": [ 00:08:27.015 "lvs/lvol" 00:08:27.015 ], 00:08:27.015 "product_name": "Logical Volume", 00:08:27.015 "block_size": 4096, 00:08:27.015 "num_blocks": 38912, 00:08:27.015 "uuid": "5d0eba14-dd83-4a2a-bf2a-1c11dee779c2", 00:08:27.015 "assigned_rate_limits": { 00:08:27.015 "rw_ios_per_sec": 0, 00:08:27.015 "rw_mbytes_per_sec": 0, 00:08:27.015 "r_mbytes_per_sec": 0, 00:08:27.015 "w_mbytes_per_sec": 0 00:08:27.015 }, 00:08:27.015 "claimed": false, 00:08:27.015 "zoned": false, 00:08:27.015 "supported_io_types": { 00:08:27.015 "read": true, 00:08:27.015 "write": true, 00:08:27.015 "unmap": true, 00:08:27.015 "flush": false, 00:08:27.015 "reset": true, 00:08:27.015 "nvme_admin": false, 00:08:27.015 "nvme_io": false, 00:08:27.015 "nvme_io_md": false, 00:08:27.015 "write_zeroes": true, 00:08:27.015 "zcopy": false, 00:08:27.015 "get_zone_info": false, 00:08:27.015 "zone_management": false, 00:08:27.015 "zone_append": false, 00:08:27.015 "compare": false, 00:08:27.015 "compare_and_write": false, 00:08:27.015 "abort": false, 00:08:27.015 "seek_hole": true, 00:08:27.015 "seek_data": true, 00:08:27.015 "copy": false, 00:08:27.015 "nvme_iov_md": false 00:08:27.015 }, 00:08:27.015 "driver_specific": { 00:08:27.015 "lvol": { 
00:08:27.015 "lvol_store_uuid": "bce5acd4-43f8-43a0-91c8-5abc4d1fea9f", 00:08:27.015 "base_bdev": "aio_bdev", 00:08:27.015 "thin_provision": false, 00:08:27.015 "num_allocated_clusters": 38, 00:08:27.015 "snapshot": false, 00:08:27.015 "clone": false, 00:08:27.015 "esnap_clone": false 00:08:27.015 } 00:08:27.015 } 00:08:27.015 } 00:08:27.015 ] 00:08:27.273 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:27.273 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:27.273 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:27.530 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:27.530 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:27.530 16:23:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:27.530 16:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.530 16:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5d0eba14-dd83-4a2a-bf2a-1c11dee779c2 00:08:27.788 16:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bce5acd4-43f8-43a0-91c8-5abc4d1fea9f 00:08:28.046 16:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.304 16:23:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.871 ************************************ 00:08:28.871 END TEST lvs_grow_clean 00:08:28.871 ************************************ 00:08:28.871 00:08:28.871 real 0m18.289s 00:08:28.871 user 0m17.100s 00:08:28.871 sys 0m2.665s 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.871 ************************************ 00:08:28.871 START TEST lvs_grow_dirty 00:08:28.871 ************************************ 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:28.871 16:23:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.871 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.129 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:29.129 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:29.387 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:29.387 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:29.387 16:23:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:29.645 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:29.645 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:29.645 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 lvol 150 00:08:29.941 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a3df3d0-20df-46ca-a837-62539d62b460 00:08:29.941 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.941 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:30.210 [2024-07-15 16:23:15.574744] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:30.211 [2024-07-15 16:23:15.574865] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:30.211 true 00:08:30.211 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:30.211 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:30.469 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:08:30.469 16:23:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:30.728 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a3df3d0-20df-46ca-a837-62539d62b460 00:08:30.728 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:30.987 [2024-07-15 16:23:16.483317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.987 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66025 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66025 /var/tmp/bdevperf.sock 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66025 ']' 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.265 16:23:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.265 [2024-07-15 16:23:16.778472] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
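At this point the dirty variant has staged all of its storage over JSON-RPC: a 200M file-backed AIO bdev, an lvstore named lvs with 4M clusters on top of it, a 150M lvol, a truncate of the backing file to 400M followed by bdev_aio_rescan, and an NVMe/TCP subsystem exposing the lvol, with bdevperf launched against its own RPC socket. Condensed from the trace into bare rpc.py calls, and with the waitforbdev/waitforlisten retry helpers and error handling omitted, the flow is roughly the following sketch (the shell variable names are illustrative, not taken from the script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    # 200M file-backed AIO bdev carrying an lvstore with 4M clusters
    rm -f "$aio_file" && truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # 150M lvol inside it, then grow the backing file and rescan the AIO bdev
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev

    # export the lvol over NVMe/TCP so bdevperf can drive I/O against it
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # bdevperf runs as a separate process with its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

bdevperf then attaches to that listener with bdev_nvme_attach_controller over /var/tmp/bdevperf.sock, as the trace continues below.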
00:08:31.265 [2024-07-15 16:23:16.778588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66025 ] 00:08:31.524 [2024-07-15 16:23:16.913718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.524 [2024-07-15 16:23:17.045716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.784 [2024-07-15 16:23:17.106196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:32.352 16:23:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.352 16:23:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:32.352 16:23:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:32.611 Nvme0n1 00:08:32.611 16:23:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:32.900 [ 00:08:32.900 { 00:08:32.900 "name": "Nvme0n1", 00:08:32.900 "aliases": [ 00:08:32.900 "0a3df3d0-20df-46ca-a837-62539d62b460" 00:08:32.900 ], 00:08:32.900 "product_name": "NVMe disk", 00:08:32.900 "block_size": 4096, 00:08:32.900 "num_blocks": 38912, 00:08:32.900 "uuid": "0a3df3d0-20df-46ca-a837-62539d62b460", 00:08:32.900 "assigned_rate_limits": { 00:08:32.900 "rw_ios_per_sec": 0, 00:08:32.900 "rw_mbytes_per_sec": 0, 00:08:32.900 "r_mbytes_per_sec": 0, 00:08:32.900 "w_mbytes_per_sec": 0 00:08:32.900 }, 00:08:32.900 "claimed": false, 00:08:32.900 "zoned": false, 00:08:32.900 "supported_io_types": { 00:08:32.900 "read": true, 00:08:32.900 "write": true, 00:08:32.900 "unmap": true, 00:08:32.900 "flush": true, 00:08:32.901 "reset": true, 00:08:32.901 "nvme_admin": true, 00:08:32.901 "nvme_io": true, 00:08:32.901 "nvme_io_md": false, 00:08:32.901 "write_zeroes": true, 00:08:32.901 "zcopy": false, 00:08:32.901 "get_zone_info": false, 00:08:32.901 "zone_management": false, 00:08:32.901 "zone_append": false, 00:08:32.901 "compare": true, 00:08:32.901 "compare_and_write": true, 00:08:32.901 "abort": true, 00:08:32.901 "seek_hole": false, 00:08:32.901 "seek_data": false, 00:08:32.901 "copy": true, 00:08:32.901 "nvme_iov_md": false 00:08:32.901 }, 00:08:32.901 "memory_domains": [ 00:08:32.901 { 00:08:32.901 "dma_device_id": "system", 00:08:32.901 "dma_device_type": 1 00:08:32.901 } 00:08:32.901 ], 00:08:32.901 "driver_specific": { 00:08:32.901 "nvme": [ 00:08:32.901 { 00:08:32.901 "trid": { 00:08:32.901 "trtype": "TCP", 00:08:32.901 "adrfam": "IPv4", 00:08:32.901 "traddr": "10.0.0.2", 00:08:32.901 "trsvcid": "4420", 00:08:32.901 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:32.901 }, 00:08:32.901 "ctrlr_data": { 00:08:32.901 "cntlid": 1, 00:08:32.901 "vendor_id": "0x8086", 00:08:32.901 "model_number": "SPDK bdev Controller", 00:08:32.901 "serial_number": "SPDK0", 00:08:32.901 "firmware_revision": "24.09", 00:08:32.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.901 "oacs": { 00:08:32.901 "security": 0, 00:08:32.901 "format": 0, 00:08:32.901 "firmware": 0, 00:08:32.901 "ns_manage": 0 00:08:32.901 }, 00:08:32.901 "multi_ctrlr": true, 00:08:32.901 
"ana_reporting": false 00:08:32.901 }, 00:08:32.901 "vs": { 00:08:32.901 "nvme_version": "1.3" 00:08:32.901 }, 00:08:32.901 "ns_data": { 00:08:32.901 "id": 1, 00:08:32.901 "can_share": true 00:08:32.901 } 00:08:32.901 } 00:08:32.901 ], 00:08:32.901 "mp_policy": "active_passive" 00:08:32.901 } 00:08:32.901 } 00:08:32.901 ] 00:08:32.901 16:23:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66054 00:08:32.901 16:23:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.901 16:23:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:32.901 Running I/O for 10 seconds... 00:08:34.306 Latency(us) 00:08:34.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.306 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:34.306 =================================================================================================================== 00:08:34.306 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:34.306 00:08:34.873 16:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:34.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.873 Nvme0n1 : 2.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:34.873 =================================================================================================================== 00:08:34.873 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:34.873 00:08:35.132 true 00:08:35.132 16:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:35.132 16:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:35.391 16:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:35.391 16:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:35.391 16:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66054 00:08:35.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.958 Nvme0n1 : 3.00 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:35.958 =================================================================================================================== 00:08:35.958 Total : 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:08:35.958 00:08:36.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.894 Nvme0n1 : 4.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:08:36.894 =================================================================================================================== 00:08:36.894 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:08:36.894 00:08:37.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.879 Nvme0n1 : 5.00 7670.80 29.96 0.00 0.00 0.00 0.00 0.00 00:08:37.879 =================================================================================================================== 00:08:37.879 Total : 7670.80 29.96 0.00 0.00 0.00 
0.00 0.00 00:08:37.879 00:08:39.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.258 Nvme0n1 : 6.00 7641.17 29.85 0.00 0.00 0.00 0.00 0.00 00:08:39.258 =================================================================================================================== 00:08:39.258 Total : 7641.17 29.85 0.00 0.00 0.00 0.00 0.00 00:08:39.258 00:08:40.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.194 Nvme0n1 : 7.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:40.194 =================================================================================================================== 00:08:40.194 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:40.194 00:08:41.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.132 Nvme0n1 : 8.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:41.132 =================================================================================================================== 00:08:41.132 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:41.132 00:08:42.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.067 Nvme0n1 : 9.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:42.067 =================================================================================================================== 00:08:42.067 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:42.067 00:08:43.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.005 Nvme0n1 : 10.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:43.005 =================================================================================================================== 00:08:43.005 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:43.005 00:08:43.005 00:08:43.005 Latency(us) 00:08:43.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.005 Nvme0n1 : 10.01 7498.33 29.29 0.00 0.00 17064.93 13702.98 158239.65 00:08:43.005 =================================================================================================================== 00:08:43.005 Total : 7498.33 29.29 0.00 0.00 17064.93 13702.98 158239.65 00:08:43.005 0 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66025 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66025 ']' 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66025 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66025 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:43.005 killing process with pid 66025 00:08:43.005 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.005 00:08:43.005 Latency(us) 00:08:43.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.005 =================================================================================================================== 00:08:43.005 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66025' 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66025 00:08:43.005 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66025 00:08:43.264 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.522 16:23:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.781 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:43.781 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65672 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65672 00:08:44.041 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65672 Killed "${NVMF_APP[@]}" "$@" 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66187 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66187 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66187 ']' 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
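The clean variant stopped here, but the dirty variant now simulates a crash: after verifying that the grown lvstore reports 61 free clusters, it kills the target with SIGKILL, so the lvstore is never closed cleanly, and immediately restarts nvmf_tgt inside the test namespace (pid 66187 in this run). Roughly, using the same illustrative variable names as in the sketch above:

    # after the I/O run the grown lvstore should report 61 free clusters
    free_clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( free_clusters == 61 ))

    # dirty case: crash the target instead of deleting the lvstore first,
    # leaving the blobstore on the AIO file without a clean shutdown
    kill -9 "$nvmfpid"

    # bring up a fresh target in the test namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!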
00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.041 16:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.041 [2024-07-15 16:23:29.532627] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:08:44.041 [2024-07-15 16:23:29.532725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.300 [2024-07-15 16:23:29.675548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.300 [2024-07-15 16:23:29.790783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.300 [2024-07-15 16:23:29.790828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.300 [2024-07-15 16:23:29.790855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.300 [2024-07-15 16:23:29.790863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.300 [2024-07-15 16:23:29.790893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.300 [2024-07-15 16:23:29.790919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.300 [2024-07-15 16:23:29.847004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.236 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.236 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:45.236 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.236 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.236 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.236 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.236 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.554 [2024-07-15 16:23:30.820052] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:45.554 [2024-07-15 16:23:30.820496] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:45.554 [2024-07-15 16:23:30.820900] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:45.554 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:45.554 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0a3df3d0-20df-46ca-a837-62539d62b460 00:08:45.554 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0a3df3d0-20df-46ca-a837-62539d62b460 00:08:45.554 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:45.554 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
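This is the core of the dirty test: pointing a new AIO bdev at the same backing file makes the lvstore load path detect the unclean shutdown and replay its metadata, which is what the "Performing recovery on blobstore" and "Recover: blob" notices above report, after which the lvol is registered again under its original UUID. A minimal sketch of that step, with the same illustrative variables as before:

    # re-attach the same backing file; loading the lvstore replays its metadata
    # (the blobstore recovery notices above) and re-registers the lvol bdev
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    $rpc bdev_wait_for_examine
    # succeeds once recovery has finished: the lvol is back under its original UUID
    $rpc bdev_get_bdevs -b "$lvol" -t 2000

The test then re-checks free_clusters and total_data_clusters against the grown values before deleting the lvol, the lvstore, and the AIO bdev, as the trace shows below.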
00:08:45.554 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:45.554 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:45.554 16:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.832 16:23:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a3df3d0-20df-46ca-a837-62539d62b460 -t 2000 00:08:45.832 [ 00:08:45.832 { 00:08:45.832 "name": "0a3df3d0-20df-46ca-a837-62539d62b460", 00:08:45.832 "aliases": [ 00:08:45.832 "lvs/lvol" 00:08:45.832 ], 00:08:45.832 "product_name": "Logical Volume", 00:08:45.832 "block_size": 4096, 00:08:45.832 "num_blocks": 38912, 00:08:45.832 "uuid": "0a3df3d0-20df-46ca-a837-62539d62b460", 00:08:45.832 "assigned_rate_limits": { 00:08:45.832 "rw_ios_per_sec": 0, 00:08:45.832 "rw_mbytes_per_sec": 0, 00:08:45.832 "r_mbytes_per_sec": 0, 00:08:45.832 "w_mbytes_per_sec": 0 00:08:45.832 }, 00:08:45.832 "claimed": false, 00:08:45.832 "zoned": false, 00:08:45.832 "supported_io_types": { 00:08:45.832 "read": true, 00:08:45.832 "write": true, 00:08:45.832 "unmap": true, 00:08:45.832 "flush": false, 00:08:45.832 "reset": true, 00:08:45.832 "nvme_admin": false, 00:08:45.832 "nvme_io": false, 00:08:45.832 "nvme_io_md": false, 00:08:45.832 "write_zeroes": true, 00:08:45.832 "zcopy": false, 00:08:45.832 "get_zone_info": false, 00:08:45.832 "zone_management": false, 00:08:45.832 "zone_append": false, 00:08:45.832 "compare": false, 00:08:45.832 "compare_and_write": false, 00:08:45.832 "abort": false, 00:08:45.832 "seek_hole": true, 00:08:45.832 "seek_data": true, 00:08:45.832 "copy": false, 00:08:45.832 "nvme_iov_md": false 00:08:45.832 }, 00:08:45.832 "driver_specific": { 00:08:45.832 "lvol": { 00:08:45.832 "lvol_store_uuid": "a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2", 00:08:45.832 "base_bdev": "aio_bdev", 00:08:45.832 "thin_provision": false, 00:08:45.832 "num_allocated_clusters": 38, 00:08:45.832 "snapshot": false, 00:08:45.832 "clone": false, 00:08:45.832 "esnap_clone": false 00:08:45.832 } 00:08:45.832 } 00:08:45.832 } 00:08:45.832 ] 00:08:45.832 16:23:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:45.832 16:23:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:45.832 16:23:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:46.090 16:23:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:46.348 16:23:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:46.348 16:23:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:46.607 16:23:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:46.607 16:23:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.607 [2024-07-15 16:23:32.118302] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:46.607 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:46.607 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:46.866 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:47.124 request: 00:08:47.124 { 00:08:47.124 "uuid": "a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2", 00:08:47.124 "method": "bdev_lvol_get_lvstores", 00:08:47.124 "req_id": 1 00:08:47.124 } 00:08:47.124 Got JSON-RPC error response 00:08:47.124 response: 00:08:47.124 { 00:08:47.124 "code": -19, 00:08:47.124 "message": "No such device" 00:08:47.124 } 00:08:47.124 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:47.124 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:47.124 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:47.124 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:47.124 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.382 aio_bdev 00:08:47.382 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0a3df3d0-20df-46ca-a837-62539d62b460 00:08:47.382 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0a3df3d0-20df-46ca-a837-62539d62b460 00:08:47.382 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:47.382 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:47.382 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:47.382 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:47.382 16:23:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.639 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a3df3d0-20df-46ca-a837-62539d62b460 -t 2000 00:08:47.897 [ 00:08:47.897 { 00:08:47.897 "name": "0a3df3d0-20df-46ca-a837-62539d62b460", 00:08:47.897 "aliases": [ 00:08:47.897 "lvs/lvol" 00:08:47.897 ], 00:08:47.897 "product_name": "Logical Volume", 00:08:47.897 "block_size": 4096, 00:08:47.897 "num_blocks": 38912, 00:08:47.897 "uuid": "0a3df3d0-20df-46ca-a837-62539d62b460", 00:08:47.897 "assigned_rate_limits": { 00:08:47.897 "rw_ios_per_sec": 0, 00:08:47.897 "rw_mbytes_per_sec": 0, 00:08:47.897 "r_mbytes_per_sec": 0, 00:08:47.897 "w_mbytes_per_sec": 0 00:08:47.897 }, 00:08:47.897 "claimed": false, 00:08:47.897 "zoned": false, 00:08:47.897 "supported_io_types": { 00:08:47.897 "read": true, 00:08:47.897 "write": true, 00:08:47.897 "unmap": true, 00:08:47.897 "flush": false, 00:08:47.897 "reset": true, 00:08:47.897 "nvme_admin": false, 00:08:47.897 "nvme_io": false, 00:08:47.897 "nvme_io_md": false, 00:08:47.897 "write_zeroes": true, 00:08:47.897 "zcopy": false, 00:08:47.897 "get_zone_info": false, 00:08:47.897 "zone_management": false, 00:08:47.897 "zone_append": false, 00:08:47.897 "compare": false, 00:08:47.897 "compare_and_write": false, 00:08:47.897 "abort": false, 00:08:47.897 "seek_hole": true, 00:08:47.897 "seek_data": true, 00:08:47.897 "copy": false, 00:08:47.897 "nvme_iov_md": false 00:08:47.897 }, 00:08:47.897 "driver_specific": { 00:08:47.897 "lvol": { 00:08:47.897 "lvol_store_uuid": "a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2", 00:08:47.897 "base_bdev": "aio_bdev", 00:08:47.897 "thin_provision": false, 00:08:47.897 "num_allocated_clusters": 38, 00:08:47.897 "snapshot": false, 00:08:47.897 "clone": false, 00:08:47.897 "esnap_clone": false 00:08:47.897 } 00:08:47.897 } 00:08:47.897 } 00:08:47.897 ] 00:08:47.897 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:47.897 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:47.897 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:48.155 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:48.155 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:48.155 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:48.414 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:48.414 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0a3df3d0-20df-46ca-a837-62539d62b460 00:08:48.414 16:23:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u a4b4f3e2-86e7-4705-a1dd-3d4fd8a435c2 00:08:48.672 16:23:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.929 16:23:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.503 ************************************ 00:08:49.503 END TEST lvs_grow_dirty 00:08:49.503 ************************************ 00:08:49.503 00:08:49.503 real 0m20.493s 00:08:49.503 user 0m43.112s 00:08:49.503 sys 0m8.096s 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:49.503 nvmf_trace.0 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.503 16:23:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:49.503 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.503 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:49.503 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.503 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.503 rmmod nvme_tcp 00:08:49.761 rmmod nvme_fabrics 00:08:49.761 rmmod nvme_keyring 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66187 ']' 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66187 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66187 ']' 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66187 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66187 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66187' 00:08:49.761 killing process with pid 66187 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66187 00:08:49.761 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66187 00:08:50.019 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:50.020 00:08:50.020 real 0m41.228s 00:08:50.020 user 1m6.636s 00:08:50.020 sys 0m11.461s 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.020 16:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.020 ************************************ 00:08:50.020 END TEST nvmf_lvs_grow 00:08:50.020 ************************************ 00:08:50.020 16:23:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:50.020 16:23:35 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.020 16:23:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:50.020 16:23:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.020 16:23:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:50.020 ************************************ 00:08:50.020 START TEST nvmf_bdev_io_wait 00:08:50.020 ************************************ 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.020 * Looking for test storage... 
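Before nvmf_bdev_io_wait starts, the nvmf_lvs_grow group above finished with the shared teardown: the nvmf_trace.0 shared-memory file was archived, the kernel NVMe/TCP initiator modules were unloaded, and the target process was stopped. Reduced to its effective commands (the _remove_spdk_ns helper has its output redirected away in the trace, so its body is not shown and only the address flush that follows it is reproduced here):

    # archive the trace shared-memory file for offline analysis
    tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
    sync

    # unload the kernel initiator modules and stop the target
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    modprobe -v -r nvme-keyring
    kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=66187 in this run

    # drop the initiator-side address used by the tests
    ip -4 addr flush nvmf_init_if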
00:08:50.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:50.020 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:50.279 Cannot find device "nvmf_tgt_br" 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.279 Cannot find device "nvmf_tgt_br2" 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:50.279 Cannot find device "nvmf_tgt_br" 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:50.279 Cannot find device "nvmf_tgt_br2" 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.279 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:50.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:50.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:08:50.537 00:08:50.537 --- 10.0.0.2 ping statistics --- 00:08:50.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.537 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:50.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:08:50.537 00:08:50.537 --- 10.0.0.3 ping statistics --- 00:08:50.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.537 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:50.537 00:08:50.537 --- 10.0.0.1 ping statistics --- 00:08:50.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.537 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66499 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66499 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66499 ']' 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
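Taken together, the nvmf_veth_init trace above builds the small virtual topology these TCP tests run on: a host-side initiator veth at 10.0.0.1, two target veths moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, their peer ends enslaved to the nvmf_br bridge, an iptables rule admitting TCP port 4420, and three pings to confirm reachability in both directions. A condensed, standalone sketch of the same setup, with names and addresses copied from the trace and error handling omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3

Bridging the host-side veth ends is what lets the initiator reach both target addresses on the same /24 without any routing in the namespace.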
00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.537 16:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.537 [2024-07-15 16:23:35.906453] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:08:50.537 [2024-07-15 16:23:35.906539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.537 [2024-07-15 16:23:36.042307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.795 [2024-07-15 16:23:36.131053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.795 [2024-07-15 16:23:36.131315] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.795 [2024-07-15 16:23:36.131492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.795 [2024-07-15 16:23:36.131654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.795 [2024-07-15 16:23:36.131814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.795 [2024-07-15 16:23:36.132183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.795 [2024-07-15 16:23:36.132264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.795 [2024-07-15 16:23:36.132223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.795 [2024-07-15 16:23:36.132269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.732 16:23:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.732 [2024-07-15 16:23:37.015872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.732 
16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.732 [2024-07-15 16:23:37.028117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.732 Malloc0 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.732 [2024-07-15 16:23:37.088943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66534 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66536 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66538 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:51.732 
16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.732 { 00:08:51.732 "params": { 00:08:51.732 "name": "Nvme$subsystem", 00:08:51.732 "trtype": "$TEST_TRANSPORT", 00:08:51.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.732 "adrfam": "ipv4", 00:08:51.732 "trsvcid": "$NVMF_PORT", 00:08:51.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.732 "hdgst": ${hdgst:-false}, 00:08:51.732 "ddgst": ${ddgst:-false} 00:08:51.732 }, 00:08:51.732 "method": "bdev_nvme_attach_controller" 00:08:51.732 } 00:08:51.732 EOF 00:08:51.732 )") 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.732 { 00:08:51.732 "params": { 00:08:51.732 "name": "Nvme$subsystem", 00:08:51.732 "trtype": "$TEST_TRANSPORT", 00:08:51.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.732 "adrfam": "ipv4", 00:08:51.732 "trsvcid": "$NVMF_PORT", 00:08:51.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.732 "hdgst": ${hdgst:-false}, 00:08:51.732 "ddgst": ${ddgst:-false} 00:08:51.732 }, 00:08:51.732 "method": "bdev_nvme_attach_controller" 00:08:51.732 } 00:08:51.732 EOF 00:08:51.732 )") 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.732 { 00:08:51.732 "params": { 00:08:51.732 "name": "Nvme$subsystem", 00:08:51.732 "trtype": "$TEST_TRANSPORT", 00:08:51.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.732 "adrfam": "ipv4", 00:08:51.732 "trsvcid": "$NVMF_PORT", 00:08:51.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.732 "hdgst": ${hdgst:-false}, 00:08:51.732 "ddgst": ${ddgst:-false} 00:08:51.732 }, 00:08:51.732 "method": "bdev_nvme_attach_controller" 00:08:51.732 } 00:08:51.732 EOF 00:08:51.732 )") 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66540 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # 
gen_nvmf_target_json 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:51.732 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.732 { 00:08:51.732 "params": { 00:08:51.732 "name": "Nvme$subsystem", 00:08:51.732 "trtype": "$TEST_TRANSPORT", 00:08:51.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.732 "adrfam": "ipv4", 00:08:51.732 "trsvcid": "$NVMF_PORT", 00:08:51.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.732 "hdgst": ${hdgst:-false}, 00:08:51.732 "ddgst": ${ddgst:-false} 00:08:51.732 }, 00:08:51.732 "method": "bdev_nvme_attach_controller" 00:08:51.732 } 00:08:51.733 EOF 00:08:51.733 )") 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.733 "params": { 00:08:51.733 "name": "Nvme1", 00:08:51.733 "trtype": "tcp", 00:08:51.733 "traddr": "10.0.0.2", 00:08:51.733 "adrfam": "ipv4", 00:08:51.733 "trsvcid": "4420", 00:08:51.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.733 "hdgst": false, 00:08:51.733 "ddgst": false 00:08:51.733 }, 00:08:51.733 "method": "bdev_nvme_attach_controller" 00:08:51.733 }' 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.733 "params": { 00:08:51.733 "name": "Nvme1", 00:08:51.733 "trtype": "tcp", 00:08:51.733 "traddr": "10.0.0.2", 00:08:51.733 "adrfam": "ipv4", 00:08:51.733 "trsvcid": "4420", 00:08:51.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.733 "hdgst": false, 00:08:51.733 "ddgst": false 00:08:51.733 }, 00:08:51.733 "method": "bdev_nvme_attach_controller" 00:08:51.733 }' 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.733 "params": { 00:08:51.733 "name": "Nvme1", 00:08:51.733 "trtype": "tcp", 00:08:51.733 "traddr": "10.0.0.2", 00:08:51.733 "adrfam": "ipv4", 00:08:51.733 "trsvcid": "4420", 00:08:51.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.733 "hdgst": false, 00:08:51.733 "ddgst": false 00:08:51.733 }, 00:08:51.733 "method": "bdev_nvme_attach_controller" 00:08:51.733 }' 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:51.733 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.733 "params": { 00:08:51.733 "name": "Nvme1", 00:08:51.733 "trtype": "tcp", 00:08:51.733 "traddr": "10.0.0.2", 00:08:51.733 "adrfam": "ipv4", 00:08:51.733 "trsvcid": "4420", 00:08:51.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.733 "hdgst": false, 00:08:51.733 "ddgst": false 00:08:51.733 }, 00:08:51.733 "method": "bdev_nvme_attach_controller" 00:08:51.733 }' 00:08:51.733 [2024-07-15 16:23:37.152804] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:08:51.733 [2024-07-15 16:23:37.153115] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib 16:23:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66534 00:08:51.733 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:51.733 [2024-07-15 16:23:37.162256] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:08:51.733 [2024-07-15 16:23:37.162528] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:51.733 [2024-07-15 16:23:37.189817] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:08:51.733 [2024-07-15 16:23:37.190128] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:51.733 [2024-07-15 16:23:37.203298] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
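Each bdevperf instance receives its NVMe-oF configuration through "--json /dev/fd/63", i.e. a process substitution over gen_nvmf_target_json; the attach-controller entry it renders is the JSON printed by the printf calls above. Reflowed here purely for readability (the surrounding jq, IFS=',' and printf steps presumably splice one such entry per subsystem into the bdev configuration document that bdevperf loads; the outer wrapper itself is not echoed in the trace):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }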
00:08:51.733 [2024-07-15 16:23:37.203631] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:51.992 [2024-07-15 16:23:37.387428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.992 [2024-07-15 16:23:37.440635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.992 [2024-07-15 16:23:37.490560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:51.992 [2024-07-15 16:23:37.514256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.992 [2024-07-15 16:23:37.529016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:51.992 [2024-07-15 16:23:37.538446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.250 [2024-07-15 16:23:37.575266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.250 [2024-07-15 16:23:37.587027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.250 [2024-07-15 16:23:37.612655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:52.250 Running I/O for 1 seconds... 00:08:52.250 [2024-07-15 16:23:37.662548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.250 Running I/O for 1 seconds... 00:08:52.250 [2024-07-15 16:23:37.701167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.250 [2024-07-15 16:23:37.749319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.250 Running I/O for 1 seconds... 00:08:52.509 Running I/O for 1 seconds... 
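The four workloads above run as four concurrent bdevperf processes (write, read, flush, unmap; PIDs 66534, 66536, 66538, 66540), each with its own core mask and instance id, all attaching to the same cnode1 subsystem; the script then syncs and waits on each PID before the per-job results below appear. A schematic of the launch-and-wait pattern, where the backgrounding, the "$!" bookkeeping and the process substitution are assumptions inferred from the /dev/fd/63 arguments and PID variables visible in the trace:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!
    sync
    wait "$WRITE_PID"    # the trace then waits on READ_PID, FLUSH_PID and UNMAP_PID in turn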
00:08:53.441 00:08:53.441 Latency(us) 00:08:53.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.441 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:53.441 Nvme1n1 : 1.00 176850.36 690.82 0.00 0.00 720.92 351.88 815.48 00:08:53.441 =================================================================================================================== 00:08:53.441 Total : 176850.36 690.82 0.00 0.00 720.92 351.88 815.48 00:08:53.441 00:08:53.441 Latency(us) 00:08:53.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.441 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:53.441 Nvme1n1 : 1.03 6447.47 25.19 0.00 0.00 19585.52 8698.41 41228.10 00:08:53.441 =================================================================================================================== 00:08:53.441 Total : 6447.47 25.19 0.00 0.00 19585.52 8698.41 41228.10 00:08:53.441 00:08:53.441 Latency(us) 00:08:53.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.441 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:53.441 Nvme1n1 : 1.01 6039.25 23.59 0.00 0.00 21106.16 7864.32 39083.29 00:08:53.441 =================================================================================================================== 00:08:53.441 Total : 6039.25 23.59 0.00 0.00 21106.16 7864.32 39083.29 00:08:53.441 00:08:53.441 Latency(us) 00:08:53.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.441 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:53.441 Nvme1n1 : 1.01 9371.64 36.61 0.00 0.00 13613.03 5779.08 23950.43 00:08:53.442 =================================================================================================================== 00:08:53.442 Total : 9371.64 36.61 0.00 0.00 13613.03 5779.08 23950.43 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66536 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66538 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66540 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.700 rmmod nvme_tcp 00:08:53.700 rmmod nvme_fabrics 00:08:53.700 rmmod nvme_keyring 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66499 ']' 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66499 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66499 ']' 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66499 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66499 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66499' 00:08:53.700 killing process with pid 66499 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66499 00:08:53.700 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66499 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:53.959 00:08:53.959 real 0m3.994s 00:08:53.959 user 0m17.873s 00:08:53.959 sys 0m2.114s 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.959 ************************************ 00:08:53.959 16:23:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.959 END TEST nvmf_bdev_io_wait 00:08:53.959 ************************************ 00:08:53.959 16:23:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:53.959 16:23:39 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:53.959 16:23:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.959 16:23:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.959 16:23:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.959 ************************************ 00:08:53.959 START TEST nvmf_queue_depth 00:08:53.959 ************************************ 00:08:53.959 16:23:39 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.218 * Looking for test storage... 00:08:54.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:08:54.218 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:54.219 Cannot find device "nvmf_tgt_br" 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.219 Cannot find device "nvmf_tgt_br2" 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:54.219 Cannot find device "nvmf_tgt_br" 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:54.219 Cannot find device "nvmf_tgt_br2" 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:54.219 16:23:39 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:54.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:54.219 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:54.478 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:54.478 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:54.478 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:54.478 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:54.478 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:54.478 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.478 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.478 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:08:54.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:08:54.479 00:08:54.479 --- 10.0.0.2 ping statistics --- 00:08:54.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.479 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:54.479 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.479 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:08:54.479 00:08:54.479 --- 10.0.0.3 ping statistics --- 00:08:54.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.479 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:54.479 00:08:54.479 --- 10.0.0.1 ping statistics --- 00:08:54.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.479 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66770 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66770 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66770 ']' 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
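For the queue-depth test, nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace (note the NVMF_APP line above prepending the "ip netns exec" prefix) with a single-core mask, then blocks in waitforlisten until the RPC socket is usable. A rough standalone equivalent, with the backgrounding and pid capture assumed from the nvmfpid assignment in the trace:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten (autotest helper) waits until the app listens on /var/tmp/spdk.sock
    waitforlisten "$nvmfpid"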
00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.479 16:23:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.479 [2024-07-15 16:23:39.953878] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:08:54.479 [2024-07-15 16:23:39.953959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.792 [2024-07-15 16:23:40.097332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.792 [2024-07-15 16:23:40.202798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.792 [2024-07-15 16:23:40.202861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.792 [2024-07-15 16:23:40.202898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.792 [2024-07-15 16:23:40.202908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.792 [2024-07-15 16:23:40.202914] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.792 [2024-07-15 16:23:40.202938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.792 [2024-07-15 16:23:40.257508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.727 [2024-07-15 16:23:40.958240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.727 Malloc0 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.727 16:23:40 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.727 [2024-07-15 16:23:41.021228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66809 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66809 /var/tmp/bdevperf.sock 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66809 ']' 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.727 16:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.727 [2024-07-15 16:23:41.083779] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
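With the target up, the queue-depth test provisions everything over JSON-RPC: a TCP transport (with the "-o -u 8192" options from NVMF_TRANSPORT_OPTS), a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a TCP listener on 10.0.0.2:4420; the idle bdevperf instance started with -z and its own RPC socket at /var/tmp/bdevperf.sock then attaches that subsystem as NVMe0. Spelled out against scripts/rpc.py, which rpc_cmd appears to wrap in these tests, with arguments copied from the trace:

    # target side (default RPC socket /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: attach the remote namespace inside the waiting bdevperf process
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1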
00:08:55.727 [2024-07-15 16:23:41.083876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66809 ] 00:08:55.727 [2024-07-15 16:23:41.227370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.984 [2024-07-15 16:23:41.368040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.984 [2024-07-15 16:23:41.426204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.550 16:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.550 16:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:56.551 16:23:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:56.551 16:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.551 16:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.809 NVMe0n1 00:08:56.809 16:23:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.809 16:23:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:56.809 Running I/O for 10 seconds... 00:09:09.019 00:09:09.019 Latency(us) 00:09:09.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.019 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:09.019 Verification LBA range: start 0x0 length 0x4000 00:09:09.019 NVMe0n1 : 10.08 8178.68 31.95 0.00 0.00 124568.41 23354.65 91988.71 00:09:09.019 =================================================================================================================== 00:09:09.019 Total : 8178.68 31.95 0.00 0.00 124568.41 23354.65 91988.71 00:09:09.019 0 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66809 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66809 ']' 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66809 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66809 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:09.019 killing process with pid 66809 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66809' 00:09:09.019 Received shutdown signal, test time was about 10.000000 seconds 00:09:09.019 00:09:09.019 Latency(us) 00:09:09.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.019 =================================================================================================================== 00:09:09.019 Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66809 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66809 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.019 rmmod nvme_tcp 00:09:09.019 rmmod nvme_fabrics 00:09:09.019 rmmod nvme_keyring 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66770 ']' 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66770 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66770 ']' 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66770 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66770 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:09.019 killing process with pid 66770 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66770' 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66770 00:09:09.019 16:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66770 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:09.019 00:09:09.019 real 
0m13.576s 00:09:09.019 user 0m23.710s 00:09:09.019 sys 0m2.126s 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.019 16:23:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.019 ************************************ 00:09:09.019 END TEST nvmf_queue_depth 00:09:09.019 ************************************ 00:09:09.019 16:23:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:09.019 16:23:53 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.019 16:23:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.019 16:23:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.019 16:23:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.019 ************************************ 00:09:09.019 START TEST nvmf_target_multipath 00:09:09.019 ************************************ 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.019 * Looking for test storage... 00:09:09.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.019 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.020 16:23:53 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:09.020 Cannot find device "nvmf_tgt_br" 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.020 Cannot find device "nvmf_tgt_br2" 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:09.020 Cannot find device "nvmf_tgt_br" 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:09.020 Cannot find device "nvmf_tgt_br2" 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
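For readers following the trace: the nvmf_veth_init calls running just above and below build a small veth/netns topology. The initiator-side interface nvmf_init_if (10.0.0.1/24) stays in the default namespace, the two target-side interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peer ends are enslaved to a bridge so all three addresses can reach each other. A condensed, hand-written sketch of the same setup (not the exact common.sh code, and without its cleanup or error handling) looks roughly like this:

    # namespace plus three veth pairs: one initiator side, two target side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # target ends live inside the namespace and carry the two listener addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and tie the host-side peers together with a bridge
    for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for link in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" master nvmf_br; done

    # allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) only confirm that the three endpoints can see each other before the target is started.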
00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:09.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:09:09.020 00:09:09.020 --- 10.0.0.2 ping statistics --- 00:09:09.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.020 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:09:09.020 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:09.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:09.020 00:09:09.020 --- 10.0.0.3 ping statistics --- 00:09:09.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.021 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:09.021 00:09:09.021 --- 10.0.0.1 ping statistics --- 00:09:09.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.021 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67126 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67126 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67126 ']' 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.021 16:23:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.021 [2024-07-15 16:23:53.590809] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
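With the nvmf_tgt process now starting inside the namespace, the rest of the multipath setup is driven over rpc.py against its default /var/tmp/spdk.sock, as the trace below shows: create the TCP transport, back the subsystem with a 64 MiB malloc bdev, expose it on both namespace addresses, then connect the kernel initiator once per path. A condensed sketch of that sequence, reusing the values that appear in the trace (the hostnqn/hostid are the ones generated by nvme gen-hostnqn earlier in this test; flag spellings are copied verbatim from the log rather than glossed):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # transport options copied verbatim from the trace, plus the backing malloc bdev
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0

    # one subsystem, one namespace, two TCP listeners -> two paths to the same disk
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # kernel initiator connects once per listener address
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G

The two connections surface as nvme0c0n1 and nvme0c1n1 under a single nvme-subsystem; the test then flips each listener between optimized, non_optimized and inaccessible with nvmf_subsystem_listener_set_ana_state and polls /sys/block/nvme0c*n1/ana_state until the kernel reflects the change, while the fio wrapper keeps random read/write I/O running against /dev/nvme0n1.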
00:09:09.021 [2024-07-15 16:23:53.590899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.021 [2024-07-15 16:23:53.727448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.021 [2024-07-15 16:23:53.833690] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.021 [2024-07-15 16:23:53.833757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.021 [2024-07-15 16:23:53.833768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.021 [2024-07-15 16:23:53.833776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.021 [2024-07-15 16:23:53.833782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.021 [2024-07-15 16:23:53.833933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.021 [2024-07-15 16:23:53.834358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.021 [2024-07-15 16:23:53.834961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.021 [2024-07-15 16:23:53.834970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.021 [2024-07-15 16:23:53.888543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:09.021 16:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.021 16:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:09.021 16:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.021 16:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.021 16:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.280 16:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.280 16:23:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.280 [2024-07-15 16:23:54.802417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.538 16:23:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:09.538 Malloc0 00:09:09.538 16:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:09.819 16:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.106 16:23:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.365 [2024-07-15 16:23:55.808009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.365 16:23:55 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:10.624 [2024-07-15 16:23:56.092311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:10.624 16:23:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:10.882 16:23:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:10.882 16:23:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.883 16:23:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:10.883 16:23:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.883 16:23:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:10.883 16:23:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:13.417 16:23:58 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67223 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:13.417 16:23:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:13.417 [global] 00:09:13.417 thread=1 00:09:13.417 invalidate=1 00:09:13.417 rw=randrw 00:09:13.417 time_based=1 00:09:13.417 runtime=6 00:09:13.417 ioengine=libaio 00:09:13.417 direct=1 00:09:13.417 bs=4096 00:09:13.417 iodepth=128 00:09:13.417 norandommap=0 00:09:13.417 numjobs=1 00:09:13.417 00:09:13.417 verify_dump=1 00:09:13.417 verify_backlog=512 00:09:13.417 verify_state_save=0 00:09:13.417 do_verify=1 00:09:13.417 verify=crc32c-intel 00:09:13.417 [job0] 00:09:13.417 filename=/dev/nvme0n1 00:09:13.417 Could not set queue depth (nvme0n1) 00:09:13.417 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.417 fio-3.35 00:09:13.417 Starting 1 thread 00:09:14.019 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:14.277 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:14.536 
16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:14.536 16:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:14.795 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:15.053 16:24:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67223 00:09:19.241 00:09:19.241 job0: (groupid=0, jobs=1): err= 0: pid=67244: Mon Jul 15 16:24:04 2024 00:09:19.241 read: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(249MiB/6004msec) 00:09:19.241 slat (usec): min=6, max=6216, avg=56.00, stdev=221.26 00:09:19.241 clat (usec): min=1784, max=22355, avg=8290.72, stdev=1523.05 00:09:19.241 lat (usec): min=1795, max=22367, avg=8346.72, stdev=1528.29 00:09:19.241 clat percentiles (usec): 00:09:19.241 | 1.00th=[ 4228], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7570], 00:09:19.241 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8291], 00:09:19.241 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11863], 00:09:19.241 | 99.00th=[12911], 99.50th=[13304], 99.90th=[16450], 99.95th=[20841], 00:09:19.241 | 99.99th=[21890] 00:09:19.241 bw ( KiB/s): min= 2592, max=28656, per=50.33%, avg=21354.18, stdev=7449.78, samples=11 00:09:19.241 iops : min= 648, max= 7164, avg=5338.55, stdev=1862.44, samples=11 00:09:19.241 write: IOPS=6094, BW=23.8MiB/s (25.0MB/s)(127MiB/5332msec); 0 zone resets 00:09:19.241 slat (usec): min=13, max=4210, avg=64.77, stdev=159.79 00:09:19.241 clat (usec): min=2402, max=21865, avg=7187.42, stdev=1376.38 00:09:19.241 lat (usec): min=2427, max=21888, avg=7252.19, stdev=1380.50 00:09:19.242 clat percentiles (usec): 00:09:19.242 | 1.00th=[ 3261], 5.00th=[ 4178], 10.00th=[ 5538], 20.00th=[ 6652], 00:09:19.242 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:09:19.242 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8455], 00:09:19.242 | 99.00th=[11338], 99.50th=[11994], 99.90th=[19006], 99.95th=[20579], 00:09:19.242 | 99.99th=[21627] 00:09:19.242 bw ( KiB/s): min= 2568, max=28808, per=87.78%, avg=21400.73, stdev=7364.62, samples=11 00:09:19.242 iops : min= 642, max= 7202, avg=5350.18, stdev=1841.15, samples=11 00:09:19.242 lat (msec) : 2=0.01%, 4=1.88%, 10=91.82%, 20=6.23%, 50=0.06% 00:09:19.242 cpu : usr=5.48%, sys=21.41%, ctx=5559, majf=0, minf=133 00:09:19.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:19.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.242 issued rwts: total=63687,32498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.242 00:09:19.242 Run status group 0 (all jobs): 00:09:19.242 READ: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=249MiB (261MB), run=6004-6004msec 00:09:19.242 WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=127MiB (133MB), run=5332-5332msec 00:09:19.242 00:09:19.242 Disk stats (read/write): 00:09:19.242 nvme0n1: ios=62662/31974, merge=0/0, ticks=498025/215579, in_queue=713604, util=98.63% 00:09:19.242 16:24:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:19.500 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.3 -s 4420 -n optimized 00:09:19.758 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:19.758 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:19.758 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:19.758 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:19.758 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:19.758 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:19.758 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:19.759 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:19.759 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:19.759 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:19.759 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:19.759 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:19.759 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:19.759 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67324 00:09:19.759 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:19.759 16:24:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:19.759 [global] 00:09:19.759 thread=1 00:09:19.759 invalidate=1 00:09:19.759 rw=randrw 00:09:19.759 time_based=1 00:09:19.759 runtime=6 00:09:19.759 ioengine=libaio 00:09:19.759 direct=1 00:09:19.759 bs=4096 00:09:19.759 iodepth=128 00:09:19.759 norandommap=0 00:09:19.759 numjobs=1 00:09:19.759 00:09:19.759 verify_dump=1 00:09:19.759 verify_backlog=512 00:09:19.759 verify_state_save=0 00:09:19.759 do_verify=1 00:09:19.759 verify=crc32c-intel 00:09:19.759 [job0] 00:09:19.759 filename=/dev/nvme0n1 00:09:20.017 Could not set queue depth (nvme0n1) 00:09:20.017 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.017 fio-3.35 00:09:20.017 Starting 1 thread 00:09:20.954 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:21.213 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.486 16:24:06 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.486 16:24:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:21.748 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:22.006 16:24:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67324 00:09:26.212 00:09:26.212 job0: (groupid=0, jobs=1): err= 0: pid=67345: Mon Jul 15 16:24:11 2024 00:09:26.212 read: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(276MiB/6002msec) 00:09:26.212 slat (usec): min=3, max=6048, avg=42.41, stdev=187.34 00:09:26.212 clat (usec): min=944, max=15266, avg=7440.37, stdev=1897.17 00:09:26.212 lat (usec): min=953, max=15275, avg=7482.77, stdev=1912.18 00:09:26.212 clat percentiles (usec): 00:09:26.212 | 1.00th=[ 2868], 5.00th=[ 4047], 10.00th=[ 4752], 20.00th=[ 5800], 00:09:26.212 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8029], 00:09:26.212 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[10683], 00:09:26.212 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13435], 99.95th=[13698], 00:09:26.212 | 99.99th=[14484] 00:09:26.212 bw ( KiB/s): min= 9424, max=38336, per=53.53%, avg=25235.64, stdev=7367.93, samples=11 00:09:26.212 iops : min= 2356, max= 9584, avg=6308.91, stdev=1841.98, samples=11 00:09:26.212 write: IOPS=6883, BW=26.9MiB/s (28.2MB/s)(146MiB/5426msec); 0 zone resets 00:09:26.212 slat (usec): min=5, max=3123, avg=53.28, stdev=132.74 00:09:26.212 clat (usec): min=1278, max=14404, avg=6311.73, stdev=1763.39 00:09:26.212 lat (usec): min=1304, max=14426, avg=6365.01, stdev=1778.45 00:09:26.212 clat percentiles (usec): 00:09:26.212 | 1.00th=[ 2442], 5.00th=[ 3261], 10.00th=[ 3687], 20.00th=[ 4359], 00:09:26.212 | 30.00th=[ 5145], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7242], 00:09:26.212 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8291], 00:09:26.212 | 99.00th=[10552], 99.50th=[11338], 99.90th=[12649], 99.95th=[12780], 00:09:26.212 | 99.99th=[13435] 00:09:26.212 bw ( KiB/s): min= 9968, max=37712, per=91.51%, avg=25194.18, stdev=7213.59, samples=11 00:09:26.212 iops : min= 2492, max= 9428, avg=6298.55, stdev=1803.40, samples=11 00:09:26.212 lat (usec) : 1000=0.01% 00:09:26.212 lat (msec) : 2=0.21%, 4=7.85%, 10=87.96%, 20=3.97% 00:09:26.212 cpu : usr=5.48%, sys=23.88%, ctx=6079, majf=0, minf=96 00:09:26.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:26.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.212 issued rwts: total=70731,37348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.212 00:09:26.212 Run status group 0 (all jobs): 00:09:26.212 READ: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=276MiB (290MB), run=6002-6002msec 00:09:26.212 WRITE: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=146MiB (153MB), run=5426-5426msec 00:09:26.212 00:09:26.212 Disk stats (read/write): 00:09:26.212 nvme0n1: ios=69298/37342, merge=0/0, ticks=492531/220080, in_queue=712611, util=98.65% 00:09:26.212 16:24:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:26.212 16:24:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.212 16:24:11 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:26.212 16:24:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:26.212 16:24:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.212 16:24:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:26.212 16:24:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.212 16:24:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:26.212 16:24:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.779 rmmod nvme_tcp 00:09:26.779 rmmod nvme_fabrics 00:09:26.779 rmmod nvme_keyring 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67126 ']' 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67126 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67126 ']' 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67126 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67126 00:09:26.779 killing process with pid 67126 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67126' 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67126 00:09:26.779 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67126 00:09:27.038 
16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:27.038 00:09:27.038 real 0m19.341s 00:09:27.038 user 1m12.325s 00:09:27.038 sys 0m9.938s 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.038 16:24:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.038 ************************************ 00:09:27.038 END TEST nvmf_target_multipath 00:09:27.038 ************************************ 00:09:27.038 16:24:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:27.038 16:24:12 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:27.038 16:24:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.038 16:24:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.038 16:24:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.038 ************************************ 00:09:27.038 START TEST nvmf_zcopy 00:09:27.038 ************************************ 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:27.038 * Looking for test storage... 
00:09:27.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.038 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:27.296 Cannot find device "nvmf_tgt_br" 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.296 Cannot find device "nvmf_tgt_br2" 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:27.296 Cannot find device "nvmf_tgt_br" 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:27.296 Cannot find device "nvmf_tgt_br2" 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.296 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:27.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:27.555 00:09:27.555 --- 10.0.0.2 ping statistics --- 00:09:27.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.555 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:27.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:09:27.555 00:09:27.555 --- 10.0.0.3 ping statistics --- 00:09:27.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.555 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:27.555 00:09:27.555 --- 10.0.0.1 ping statistics --- 00:09:27.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.555 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67592 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67592 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67592 ']' 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.555 16:24:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 [2024-07-15 16:24:13.006446] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:09:27.555 [2024-07-15 16:24:13.007351] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.814 [2024-07-15 16:24:13.147933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.814 [2024-07-15 16:24:13.268821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.814 [2024-07-15 16:24:13.269073] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:27.814 [2024-07-15 16:24:13.269294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.814 [2024-07-15 16:24:13.269601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.814 [2024-07-15 16:24:13.269727] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.814 [2024-07-15 16:24:13.269888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.814 [2024-07-15 16:24:13.329267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.749 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.749 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:28.749 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.749 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.750 [2024-07-15 16:24:14.059719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.750 [2024-07-15 16:24:14.075798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:28.750 malloc0 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:28.750 { 00:09:28.750 "params": { 00:09:28.750 "name": "Nvme$subsystem", 00:09:28.750 "trtype": "$TEST_TRANSPORT", 00:09:28.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:28.750 "adrfam": "ipv4", 00:09:28.750 "trsvcid": "$NVMF_PORT", 00:09:28.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:28.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:28.750 "hdgst": ${hdgst:-false}, 00:09:28.750 "ddgst": ${ddgst:-false} 00:09:28.750 }, 00:09:28.750 "method": "bdev_nvme_attach_controller" 00:09:28.750 } 00:09:28.750 EOF 00:09:28.750 )") 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:28.750 16:24:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:28.750 "params": { 00:09:28.750 "name": "Nvme1", 00:09:28.750 "trtype": "tcp", 00:09:28.750 "traddr": "10.0.0.2", 00:09:28.750 "adrfam": "ipv4", 00:09:28.750 "trsvcid": "4420", 00:09:28.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:28.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:28.750 "hdgst": false, 00:09:28.750 "ddgst": false 00:09:28.750 }, 00:09:28.750 "method": "bdev_nvme_attach_controller" 00:09:28.750 }' 00:09:28.750 [2024-07-15 16:24:14.175183] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:09:28.750 [2024-07-15 16:24:14.175278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67625 ] 00:09:29.009 [2024-07-15 16:24:14.316985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.009 [2024-07-15 16:24:14.436239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.009 [2024-07-15 16:24:14.502545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.268 Running I/O for 10 seconds... 
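The 10-second verify job launched above runs against a target that the trace has just assembled end to end: a TCP transport with zero-copy enabled and in-capsule data disabled, subsystem nqn.2016-06.io.spdk:cnode1 with data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4 KiB blocks exposed as namespace 1. A minimal sketch of the same bring-up issued by hand, assuming rpc_cmd is a thin wrapper that forwards these arguments to scripts/rpc.py over the default /var/tmp/spdk.sock (paths and flags copied from the trace, not a definitive reproduction of the harness):

# start the target inside the test namespace, as the harness does, then
# wait for /var/tmp/spdk.sock before issuing RPCs (the harness uses waitforlisten)
spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &

rpc="$spdk/scripts/rpc.py"   # the UNIX-domain RPC socket is reachable from outside the netns
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy          # '-t tcp -o' is NVMF_TRANSPORT_OPTS; '-c 0 --zcopy' is added by zcopy.sh
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -a: allow any host
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                  # 32 MiB ramdisk, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1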
00:09:39.243 00:09:39.243 Latency(us) 00:09:39.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.243 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:39.243 Verification LBA range: start 0x0 length 0x1000 00:09:39.243 Nvme1n1 : 10.01 5823.92 45.50 0.00 0.00 21908.65 554.82 31457.28 00:09:39.243 =================================================================================================================== 00:09:39.243 Total : 5823.92 45.50 0.00 0.00 21908.65 554.82 31457.28 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67747 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:39.501 { 00:09:39.501 "params": { 00:09:39.501 "name": "Nvme$subsystem", 00:09:39.501 "trtype": "$TEST_TRANSPORT", 00:09:39.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.501 "adrfam": "ipv4", 00:09:39.501 "trsvcid": "$NVMF_PORT", 00:09:39.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.501 "hdgst": ${hdgst:-false}, 00:09:39.501 "ddgst": ${ddgst:-false} 00:09:39.501 }, 00:09:39.501 "method": "bdev_nvme_attach_controller" 00:09:39.501 } 00:09:39.501 EOF 00:09:39.501 )") 00:09:39.501 [2024-07-15 16:24:24.861059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.861107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:39.501 16:24:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:39.501 "params": { 00:09:39.501 "name": "Nvme1", 00:09:39.501 "trtype": "tcp", 00:09:39.501 "traddr": "10.0.0.2", 00:09:39.501 "adrfam": "ipv4", 00:09:39.501 "trsvcid": "4420", 00:09:39.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.501 "hdgst": false, 00:09:39.501 "ddgst": false 00:09:39.501 }, 00:09:39.501 "method": "bdev_nvme_attach_controller" 00:09:39.501 }' 00:09:39.501 [2024-07-15 16:24:24.873023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.873055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.881021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.881050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.893022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.893072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.905027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.905056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.911511] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:09:39.501 [2024-07-15 16:24:24.911594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67747 ] 00:09:39.501 [2024-07-15 16:24:24.917026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.917056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.929028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.929064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.941035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.941064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.953033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.953061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.965033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.965060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.977055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.977083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:24.989060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:24.989089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:25.001059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:25.001086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:25.013062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:25.013090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:25.025087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:25.025144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:25.037070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:25.037099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:25.049074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.501 [2024-07-15 16:24:25.049104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.501 [2024-07-15 16:24:25.049240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.760 [2024-07-15 16:24:25.057078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.057107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.065078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.065105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.073079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.073106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.081082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.081109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.089083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.089110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.097084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.097111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.105111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.105140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.113094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.113120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.121102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 
[2024-07-15 16:24:25.121131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.133105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.133134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.141105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.141132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.149105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.149132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.157107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.157134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.165131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.165159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.168614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.760 [2024-07-15 16:24:25.177118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.177145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.185122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.185150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.193128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.193157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.201129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.201158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.213155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.213189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.221146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.221191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.229149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.229194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.233678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.760 [2024-07-15 16:24:25.237146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.237179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.245170] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.245199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.253153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.253182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.261158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.261186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.269153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.269195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.277241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.277273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.285227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.285274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.293237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.293283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.760 [2024-07-15 16:24:25.301275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.760 [2024-07-15 16:24:25.301307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.309229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.309263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.317239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.317270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.325263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.325294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.333270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.333298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.341287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.341321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.349301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.349330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 Running I/O for 5 seconds... 
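From 16:24:24.861 onward the trace is dominated by repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs. These are expected here: alongside the second bdevperf job, the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already occupied by malloc0, so each attempt pauses the subsystem, fails in nvmf_rpc_ns_paused, and resumes it, exercising subsystem pause/resume while zero-copy I/O is in flight. The 5-second job itself is driven by the JSON fragment printed by gen_nvmf_target_json above and fed to bdevperf through a /dev/fd process substitution. A minimal standalone sketch of the same run, assuming the fragment sits inside the standard SPDK JSON-config envelope and using an illustrative /tmp/nvme1.json in place of /dev/fd/63:

# write the bdevperf configuration (envelope assumed; the harness may add
# further config entries around the attach_controller call)
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# 5 s run, queue depth 128, 50/50 random read/write, 8 KiB I/O (flags as traced)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192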
00:09:40.020 [2024-07-15 16:24:25.357305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.357338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.371346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.371384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.382805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.382844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.398390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.398426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.414030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.414086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.424473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.424509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.436907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.436984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.452146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.452182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.469433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.469471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.479215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.479266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.490646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.490682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.501783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.501837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.513840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.513937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.530889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.530940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.541190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 
[2024-07-15 16:24:25.541241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.020 [2024-07-15 16:24:25.555822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.020 [2024-07-15 16:24:25.555906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.573468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.573507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.583474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.583509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.599047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.599088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.609511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.609547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.624520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.624560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.641433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.641490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.652066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.652109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.666399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.666438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.677157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.677224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.689568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.689608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.700524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.700562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.717764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.717799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.733932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.733986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.751011] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.751050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.761678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.761732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.773594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.773630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.785013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.785054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.803197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.803260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.279 [2024-07-15 16:24:25.818318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.279 [2024-07-15 16:24:25.818361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.280 [2024-07-15 16:24:25.828319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.280 [2024-07-15 16:24:25.828361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.840896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.840946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.855992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.856029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.871273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.871309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.881252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.881287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.893613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.893651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.905149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.905200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.916562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.916602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.934153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.934198] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.950580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.950617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.966547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.966588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.976148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.976183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:25.988926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:25.988964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:26.005005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:26.005040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:26.021155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:26.021193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:26.038765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:26.038812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:26.054950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:26.054991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:26.066076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:26.066113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.538 [2024-07-15 16:24:26.081248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.538 [2024-07-15 16:24:26.081288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.096233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.096285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.105914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.105948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.118409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.118445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.129586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.129622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.145939] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.146002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.156159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.156209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.171188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.171225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.181829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.181890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.196495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.196533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.213542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.213578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.223547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.223582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.238826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.238890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.249510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.249546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.263986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.264023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.280169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.280208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.298248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.298298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.313148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.313185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.322996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.323034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.796 [2024-07-15 16:24:26.339303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.796 [2024-07-15 16:24:26.339341] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:41.054 [2024-07-15 16:24:26.355642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:41.054 [2024-07-15 16:24:26.355682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2054 / nvmf_rpc.c:1546 error pair repeats with successive timestamps, from 2024-07-15 16:24:26.355642 through 2024-07-15 16:24:30.358200 (elapsed 00:09:41.054 to 00:09:44.973) ...]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.714 [2024-07-15 16:24:30.234405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.714 [2024-07-15 16:24:30.234444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.714 [2024-07-15 16:24:30.250209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.714 [2024-07-15 16:24:30.250247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.714 [2024-07-15 16:24:30.261171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.714 [2024-07-15 16:24:30.261209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.276462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.276502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.287180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.287233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.298314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.298350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.310386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.310421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.320457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.320494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.332370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.332407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.343439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.343476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.358161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.358200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 00:09:44.973 Latency(us) 00:09:44.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.973 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:44.973 Nvme1n1 : 5.01 11339.21 88.59 0.00 0.00 11271.83 4855.62 20375.74 00:09:44.973 =================================================================================================================== 00:09:44.973 Total : 11339.21 88.59 0.00 0.00 11271.83 4855.62 20375.74 00:09:44.973 [2024-07-15 16:24:30.363718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.363748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.371722] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.371758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.379710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.379742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.387720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.387756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.395733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.395773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.403733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.403784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.411736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.411773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.419733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.419770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.427730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.427765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.435750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.435786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.443727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.443761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.451749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.451783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.459743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.459780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.467740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.467777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.475739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.475774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.483751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.483786] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.491756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.491792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.499739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.499769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.507741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.507770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.973 [2024-07-15 16:24:30.515761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.973 [2024-07-15 16:24:30.515800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.231 [2024-07-15 16:24:30.523755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.231 [2024-07-15 16:24:30.523786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.231 [2024-07-15 16:24:30.531762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.231 [2024-07-15 16:24:30.531796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.231 [2024-07-15 16:24:30.539750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.231 [2024-07-15 16:24:30.539777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.231 [2024-07-15 16:24:30.547750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.231 [2024-07-15 16:24:30.547777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.231 [2024-07-15 16:24:30.563786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.231 [2024-07-15 16:24:30.563828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.231 [2024-07-15 16:24:30.571760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.231 [2024-07-15 16:24:30.571788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.231 [2024-07-15 16:24:30.583761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.231 [2024-07-15 16:24:30.583788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.231 [2024-07-15 16:24:30.595770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.231 [2024-07-15 16:24:30.595798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.231 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67747) - No such process 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67747 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.231 delay0 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.231 16:24:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:45.232 [2024-07-15 16:24:30.779482] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:51.791 Initializing NVMe Controllers 00:09:51.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:51.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:51.791 Initialization complete. Launching workers. 00:09:51.791 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 92 00:09:51.791 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 379, failed to submit 33 00:09:51.791 success 269, unsuccess 110, failed 0 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:51.791 rmmod nvme_tcp 00:09:51.791 rmmod nvme_fabrics 00:09:51.791 rmmod nvme_keyring 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67592 ']' 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67592 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67592 ']' 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67592 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 67592 00:09:51.791 killing process with pid 67592 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67592' 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67592 00:09:51.791 16:24:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67592 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:51.791 00:09:51.791 real 0m24.763s 00:09:51.791 user 0m40.433s 00:09:51.791 sys 0m6.884s 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.791 16:24:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.791 ************************************ 00:09:51.791 END TEST nvmf_zcopy 00:09:51.791 ************************************ 00:09:51.791 16:24:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:51.791 16:24:37 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:51.791 16:24:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:51.791 16:24:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.791 16:24:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:51.791 ************************************ 00:09:51.791 START TEST nvmf_nmic 00:09:51.791 ************************************ 00:09:51.791 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:52.049 * Looking for test storage... 
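Editor's note: the tail of the zcopy run above (bdev_delay_create, nvmf_subsystem_add_ns of delay0, then the abort example and its success/unsuccess summary) can be condensed into the sketch below. This is a minimal sketch, not the harness itself: it assumes nvmf_tgt is already running with nqn.2016-06.io.spdk:cnode1 listening on tcp/10.0.0.2:4420, that rpc.py is the in-tree script, and that malloc0 is a malloc bdev created earlier in the run (its creation is outside this excerpt, so the size/block values below mirror the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE defaults used by these tests).

# Sketch of the zcopy abort sequence logged above (assumptions noted in the lead-in).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Assumed earlier in the run: a 64 MiB malloc bdev with 512-byte blocks named malloc0.
$rpc bdev_malloc_create 64 512 -b malloc0
# Front malloc0 with a delay bdev so queued I/O stays outstanding long enough to abort.
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive 64-deep random I/O at the slow namespace and issue aborts from the example app.
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
    -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The "abort submitted ... success ... unsuccess" counters printed above tally how those submitted aborts completed against the still-outstanding I/O.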
00:09:52.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:09:52.049 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
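Editor's note: nvmf_veth_init, invoked at the end of the entry above, builds the virtual network used for NET_TYPE=virt runs; its individual ip/iptables calls appear verbatim in the entries that follow. The sketch below is a condensed, ordered version of the same topology, using the interface names and 10.0.0.x addresses from this run (root required). The second target interface (nvmf_tgt_if2 / 10.0.0.3) follows the same pattern and is omitted here for brevity.

# Condensed sketch of the topology nvmf_veth_init creates in the following entries.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the initiator-side and target-side veth peers so 10.0.0.1 can reach 10.0.0.2.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator-to-target reachability check, mirroring the log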
00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:52.050 Cannot find device "nvmf_tgt_br" 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.050 Cannot find device "nvmf_tgt_br2" 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:52.050 Cannot find device "nvmf_tgt_br" 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:52.050 Cannot find device "nvmf_tgt_br2" 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:52.050 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:52.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:52.308 00:09:52.308 --- 10.0.0.2 ping statistics --- 00:09:52.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.308 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:52.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:52.308 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:09:52.308 00:09:52.308 --- 10.0.0.3 ping statistics --- 00:09:52.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.308 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:52.308 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:52.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:52.309 00:09:52.309 --- 10.0.0.1 ping statistics --- 00:09:52.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.309 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68076 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68076 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68076 ']' 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.309 16:24:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.309 [2024-07-15 16:24:37.791270] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:09:52.309 [2024-07-15 16:24:37.791357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.566 [2024-07-15 16:24:37.931603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.566 [2024-07-15 16:24:38.046579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.566 [2024-07-15 16:24:38.046880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
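Editor's note: once the pings succeed, nvmfappstart (above) loads nvme-tcp, launches nvmf_tgt inside the target namespace and waits for its RPC socket before the test issues any RPCs; the DPDK/EAL and reactor notices around this point are that target starting up. A minimal sketch of that launch follows, using the binary and script paths from this workspace; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not the helper itself.

# Sketch of the target launch behind nvmfappstart (paths from this workspace).
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Block until the app answers on its default RPC socket before configuring it.
until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done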
00:09:52.566 [2024-07-15 16:24:38.047080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.566 [2024-07-15 16:24:38.047268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.566 [2024-07-15 16:24:38.047372] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.566 [2024-07-15 16:24:38.047670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.566 [2024-07-15 16:24:38.047804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.566 [2024-07-15 16:24:38.048282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.566 [2024-07-15 16:24:38.048294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.566 [2024-07-15 16:24:38.102283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 [2024-07-15 16:24:38.804777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 Malloc0 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.504 16:24:38 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 [2024-07-15 16:24:38.868942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.504 test case1: single bdev can't be used in multiple subsystems 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 [2024-07-15 16:24:38.892807] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:53.504 [2024-07-15 16:24:38.892845] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:53.504 [2024-07-15 16:24:38.892868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.504 request: 00:09:53.504 { 00:09:53.504 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:53.504 "namespace": { 00:09:53.504 "bdev_name": "Malloc0", 00:09:53.504 "no_auto_visible": false 00:09:53.504 }, 00:09:53.504 "method": "nvmf_subsystem_add_ns", 00:09:53.504 "req_id": 1 00:09:53.504 } 00:09:53.504 Got JSON-RPC error response 00:09:53.504 response: 00:09:53.504 { 00:09:53.504 "code": -32602, 00:09:53.504 "message": "Invalid parameters" 00:09:53.504 } 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:53.504 Adding namespace failed - expected result. 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
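Editor's note: test case1 above demonstrates that a bdev already claimed by one subsystem (Malloc0, owned by cnode1) cannot be added to a second subsystem; the JSON-RPC response is the -32602 "Invalid parameters" error shown in the log. The sketch below reproduces the check with rpc.py, assuming the target, TCP transport and cnode1/Malloc0 setup from the earlier entries.

# Sketch of test case1: adding an already-claimed bdev to a second subsystem must fail.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: Malloc0 should already be claimed by cnode1" >&2
    exit 1
fi
# rpc.py exits non-zero here and prints the JSON-RPC error seen above (-32602).

The harness records nmic_status=1 for this path and prints "Adding namespace failed - expected result.", treating the error as a pass.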
00:09:53.504 test case2: host connect to nvmf target in multiple paths 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.504 [2024-07-15 16:24:38.904971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.504 16:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:53.504 16:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:53.767 16:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:53.767 16:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:53.767 16:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.767 16:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:53.767 16:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:55.712 16:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:55.712 16:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:55.712 16:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.712 16:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:55.712 16:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.712 16:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:55.712 16:24:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:55.712 [global] 00:09:55.713 thread=1 00:09:55.713 invalidate=1 00:09:55.713 rw=write 00:09:55.713 time_based=1 00:09:55.713 runtime=1 00:09:55.713 ioengine=libaio 00:09:55.713 direct=1 00:09:55.713 bs=4096 00:09:55.713 iodepth=1 00:09:55.713 norandommap=0 00:09:55.713 numjobs=1 00:09:55.713 00:09:55.713 verify_dump=1 00:09:55.713 verify_backlog=512 00:09:55.713 verify_state_save=0 00:09:55.713 do_verify=1 00:09:55.713 verify=crc32c-intel 00:09:55.713 [job0] 00:09:55.713 filename=/dev/nvme0n1 00:09:55.713 Could not set queue depth (nvme0n1) 00:09:55.970 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.970 fio-3.35 00:09:55.970 Starting 1 thread 00:09:56.919 00:09:56.919 job0: (groupid=0, jobs=1): err= 0: pid=68162: Mon Jul 15 16:24:42 2024 00:09:56.919 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:56.919 slat (nsec): min=11403, max=40760, avg=13718.56, stdev=2509.14 00:09:56.919 clat (usec): 
min=141, max=614, avg=178.72, stdev=16.35 00:09:56.919 lat (usec): min=155, max=626, avg=192.44, stdev=16.31 00:09:56.919 clat percentiles (usec): 00:09:56.919 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:09:56.919 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:09:56.919 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 200], 00:09:56.919 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 243], 99.95th=[ 363], 00:09:56.919 | 99.99th=[ 611] 00:09:56.919 write: IOPS=3084, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1001msec); 0 zone resets 00:09:56.919 slat (usec): min=16, max=112, avg=21.02, stdev= 4.64 00:09:56.919 clat (usec): min=87, max=319, avg=107.84, stdev=11.25 00:09:56.919 lat (usec): min=105, max=340, avg=128.85, stdev=12.82 00:09:56.919 clat percentiles (usec): 00:09:56.919 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 100], 00:09:56.919 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:09:56.919 | 70.00th=[ 112], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 127], 00:09:56.919 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 172], 99.95th=[ 245], 00:09:56.919 | 99.99th=[ 318] 00:09:56.919 bw ( KiB/s): min=12288, max=12288, per=99.58%, avg=12288.00, stdev= 0.00, samples=1 00:09:56.919 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:56.919 lat (usec) : 100=11.14%, 250=88.80%, 500=0.05%, 750=0.02% 00:09:56.919 cpu : usr=3.00%, sys=7.80%, ctx=6160, majf=0, minf=2 00:09:56.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.919 issued rwts: total=3072,3088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.919 00:09:56.919 Run status group 0 (all jobs): 00:09:56.919 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:56.919 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.1MiB (12.6MB), run=1001-1001msec 00:09:56.919 00:09:56.919 Disk stats (read/write): 00:09:56.919 nvme0n1: ios=2610/3068, merge=0/0, ticks=489/354, in_queue=843, util=91.38% 00:09:56.919 16:24:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # 
sync 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.177 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.177 rmmod nvme_tcp 00:09:57.177 rmmod nvme_fabrics 00:09:57.436 rmmod nvme_keyring 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68076 ']' 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68076 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68076 ']' 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68076 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68076 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:57.436 killing process with pid 68076 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68076' 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68076 00:09:57.436 16:24:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68076 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:57.694 00:09:57.694 real 0m5.779s 00:09:57.694 user 0m18.542s 00:09:57.694 sys 0m2.268s 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.694 ************************************ 00:09:57.694 END TEST nvmf_nmic 00:09:57.694 16:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.694 ************************************ 00:09:57.694 16:24:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:57.694 16:24:43 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:57.694 16:24:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:57.694 16:24:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:57.694 16:24:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.694 ************************************ 00:09:57.694 START TEST nvmf_fio_target 00:09:57.694 ************************************ 00:09:57.694 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:57.694 * Looking for test storage... 00:09:57.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.694 16:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.694 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:57.694 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.694 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.694 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.695 
16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.695 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.952 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:57.952 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:57.952 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:57.952 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:57.952 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:57.952 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:57.952 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.952 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.952 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:57.953 Cannot find device "nvmf_tgt_br" 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.953 Cannot find device "nvmf_tgt_br2" 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:57.953 Cannot find device "nvmf_tgt_br" 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:57.953 Cannot find device "nvmf_tgt_br2" 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:57.953 16:24:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.953 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:58.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:58.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:58.212 00:09:58.212 --- 10.0.0.2 ping statistics --- 00:09:58.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.212 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:58.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:58.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:58.212 00:09:58.212 --- 10.0.0.3 ping statistics --- 00:09:58.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.212 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:58.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:58.212 00:09:58.212 --- 10.0.0.1 ping statistics --- 00:09:58.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.212 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68342 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68342 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68342 ']' 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.212 16:24:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.212 [2024-07-15 16:24:43.664216] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:09:58.212 [2024-07-15 16:24:43.664325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.470 [2024-07-15 16:24:43.804212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.470 [2024-07-15 16:24:43.929273] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.470 [2024-07-15 16:24:43.929324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.470 [2024-07-15 16:24:43.929338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.470 [2024-07-15 16:24:43.929350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.470 [2024-07-15 16:24:43.929359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.470 [2024-07-15 16:24:43.929514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.470 [2024-07-15 16:24:43.930135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.470 [2024-07-15 16:24:43.930212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.470 [2024-07-15 16:24:43.930217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.470 [2024-07-15 16:24:43.986867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:59.404 16:24:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.404 16:24:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:59.404 16:24:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.404 16:24:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.404 16:24:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.404 16:24:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.404 16:24:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:59.662 [2024-07-15 16:24:44.973316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.662 16:24:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.920 16:24:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:59.920 16:24:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.179 16:24:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:00.179 16:24:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.436 16:24:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:10:00.437 16:24:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.695 16:24:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:00.695 16:24:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:00.956 16:24:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.216 16:24:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:01.216 16:24:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.474 16:24:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:01.474 16:24:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.732 16:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:01.732 16:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:01.990 16:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:02.248 16:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:02.248 16:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.507 16:24:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:02.507 16:24:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:02.765 16:24:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.023 [2024-07-15 16:24:48.456464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.023 16:24:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:03.281 16:24:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:03.539 16:24:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.539 16:24:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:03.539 16:24:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:03.539 16:24:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:03.539 16:24:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:03.539 16:24:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # 
nvme_device_counter=4 00:10:03.539 16:24:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:06.076 16:24:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:06.076 16:24:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:06.076 16:24:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.076 16:24:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:06.076 16:24:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.076 16:24:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:06.076 16:24:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:06.076 [global] 00:10:06.076 thread=1 00:10:06.076 invalidate=1 00:10:06.076 rw=write 00:10:06.076 time_based=1 00:10:06.076 runtime=1 00:10:06.076 ioengine=libaio 00:10:06.076 direct=1 00:10:06.076 bs=4096 00:10:06.076 iodepth=1 00:10:06.076 norandommap=0 00:10:06.077 numjobs=1 00:10:06.077 00:10:06.077 verify_dump=1 00:10:06.077 verify_backlog=512 00:10:06.077 verify_state_save=0 00:10:06.077 do_verify=1 00:10:06.077 verify=crc32c-intel 00:10:06.077 [job0] 00:10:06.077 filename=/dev/nvme0n1 00:10:06.077 [job1] 00:10:06.077 filename=/dev/nvme0n2 00:10:06.077 [job2] 00:10:06.077 filename=/dev/nvme0n3 00:10:06.077 [job3] 00:10:06.077 filename=/dev/nvme0n4 00:10:06.077 Could not set queue depth (nvme0n1) 00:10:06.077 Could not set queue depth (nvme0n2) 00:10:06.077 Could not set queue depth (nvme0n3) 00:10:06.077 Could not set queue depth (nvme0n4) 00:10:06.077 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.077 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.077 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.077 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.077 fio-3.35 00:10:06.077 Starting 4 threads 00:10:07.012 00:10:07.012 job0: (groupid=0, jobs=1): err= 0: pid=68532: Mon Jul 15 16:24:52 2024 00:10:07.012 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:07.012 slat (nsec): min=13104, max=45214, avg=15677.30, stdev=2344.17 00:10:07.012 clat (usec): min=198, max=326, avg=231.70, stdev=15.85 00:10:07.012 lat (usec): min=213, max=340, avg=247.38, stdev=16.36 00:10:07.012 clat percentiles (usec): 00:10:07.012 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:10:07.012 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:10:07.012 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:10:07.012 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 318], 99.95th=[ 322], 00:10:07.012 | 99.99th=[ 326] 00:10:07.012 write: IOPS=2405, BW=9622KiB/s (9853kB/s)(9632KiB/1001msec); 0 zone resets 00:10:07.012 slat (nsec): min=14592, max=89940, avg=21974.86, stdev=4617.34 00:10:07.012 clat (usec): min=145, max=1631, avg=179.44, stdev=33.46 00:10:07.012 lat (usec): min=165, max=1655, avg=201.42, stdev=34.14 00:10:07.012 clat percentiles (usec): 00:10:07.012 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:10:07.012 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 
60.00th=[ 182], 00:10:07.012 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:10:07.012 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 277], 99.95th=[ 302], 00:10:07.012 | 99.99th=[ 1631] 00:10:07.012 bw ( KiB/s): min= 9424, max= 9424, per=26.46%, avg=9424.00, stdev= 0.00, samples=1 00:10:07.012 iops : min= 2356, max= 2356, avg=2356.00, stdev= 0.00, samples=1 00:10:07.012 lat (usec) : 250=94.79%, 500=5.18% 00:10:07.012 lat (msec) : 2=0.02% 00:10:07.012 cpu : usr=1.80%, sys=7.30%, ctx=4456, majf=0, minf=6 00:10:07.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.012 issued rwts: total=2048,2408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.012 job1: (groupid=0, jobs=1): err= 0: pid=68533: Mon Jul 15 16:24:52 2024 00:10:07.012 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:07.012 slat (nsec): min=9487, max=32374, avg=11574.59, stdev=1876.62 00:10:07.012 clat (usec): min=202, max=336, avg=236.24, stdev=16.24 00:10:07.012 lat (usec): min=213, max=347, avg=247.81, stdev=16.49 00:10:07.012 clat percentiles (usec): 00:10:07.012 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 223], 00:10:07.012 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:10:07.012 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:10:07.012 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 326], 99.95th=[ 330], 00:10:07.012 | 99.99th=[ 338] 00:10:07.012 write: IOPS=2407, BW=9630KiB/s (9861kB/s)(9640KiB/1001msec); 0 zone resets 00:10:07.012 slat (nsec): min=11648, max=68633, avg=17662.36, stdev=3888.75 00:10:07.012 clat (usec): min=101, max=1550, avg=184.01, stdev=32.35 00:10:07.012 lat (usec): min=123, max=1566, avg=201.68, stdev=32.68 00:10:07.012 clat percentiles (usec): 00:10:07.012 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 172], 00:10:07.012 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:10:07.012 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 212], 00:10:07.012 | 99.00th=[ 233], 99.50th=[ 241], 99.90th=[ 281], 99.95th=[ 289], 00:10:07.012 | 99.99th=[ 1549] 00:10:07.012 bw ( KiB/s): min= 9424, max= 9424, per=26.46%, avg=9424.00, stdev= 0.00, samples=1 00:10:07.012 iops : min= 2356, max= 2356, avg=2356.00, stdev= 0.00, samples=1 00:10:07.012 lat (usec) : 250=92.04%, 500=7.94% 00:10:07.012 lat (msec) : 2=0.02% 00:10:07.012 cpu : usr=1.40%, sys=5.60%, ctx=4459, majf=0, minf=5 00:10:07.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.012 issued rwts: total=2048,2410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.012 job2: (groupid=0, jobs=1): err= 0: pid=68534: Mon Jul 15 16:24:52 2024 00:10:07.012 read: IOPS=1626, BW=6505KiB/s (6662kB/s)(6512KiB/1001msec) 00:10:07.012 slat (nsec): min=12675, max=71096, avg=18901.18, stdev=6389.50 00:10:07.012 clat (usec): min=156, max=2321, avg=315.70, stdev=87.28 00:10:07.012 lat (usec): min=172, max=2347, avg=334.60, stdev=89.79 00:10:07.012 clat percentiles (usec): 00:10:07.012 | 1.00th=[ 235], 5.00th=[ 253], 10.00th=[ 
258], 20.00th=[ 265], 00:10:07.012 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:10:07.012 | 70.00th=[ 330], 80.00th=[ 375], 90.00th=[ 420], 95.00th=[ 482], 00:10:07.012 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 758], 99.95th=[ 2311], 00:10:07.012 | 99.99th=[ 2311] 00:10:07.013 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:07.013 slat (usec): min=18, max=216, avg=25.44, stdev= 8.78 00:10:07.013 clat (usec): min=110, max=744, avg=192.80, stdev=60.14 00:10:07.013 lat (usec): min=131, max=763, avg=218.24, stdev=65.13 00:10:07.013 clat percentiles (usec): 00:10:07.013 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 137], 00:10:07.013 | 30.00th=[ 147], 40.00th=[ 178], 50.00th=[ 190], 60.00th=[ 200], 00:10:07.013 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 285], 95.00th=[ 334], 00:10:07.013 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 392], 99.95th=[ 408], 00:10:07.013 | 99.99th=[ 742] 00:10:07.013 bw ( KiB/s): min= 8192, max= 8192, per=23.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:07.013 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:07.013 lat (usec) : 250=50.84%, 500=48.23%, 750=0.87%, 1000=0.03% 00:10:07.013 lat (msec) : 4=0.03% 00:10:07.013 cpu : usr=1.90%, sys=6.50%, ctx=3679, majf=0, minf=11 00:10:07.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.013 issued rwts: total=1628,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.013 job3: (groupid=0, jobs=1): err= 0: pid=68535: Mon Jul 15 16:24:52 2024 00:10:07.013 read: IOPS=1752, BW=7009KiB/s (7177kB/s)(7016KiB/1001msec) 00:10:07.013 slat (nsec): min=12501, max=57354, avg=18374.86, stdev=5224.76 00:10:07.013 clat (usec): min=164, max=691, avg=309.76, stdev=87.60 00:10:07.013 lat (usec): min=182, max=712, avg=328.14, stdev=89.08 00:10:07.013 clat percentiles (usec): 00:10:07.013 | 1.00th=[ 182], 5.00th=[ 227], 10.00th=[ 247], 20.00th=[ 258], 00:10:07.013 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:07.013 | 70.00th=[ 302], 80.00th=[ 367], 90.00th=[ 486], 95.00th=[ 502], 00:10:07.013 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 668], 99.95th=[ 693], 00:10:07.013 | 99.99th=[ 693] 00:10:07.013 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:07.013 slat (nsec): min=16582, max=97550, avg=24236.96, stdev=6389.98 00:10:07.013 clat (usec): min=103, max=329, avg=179.15, stdev=38.58 00:10:07.013 lat (usec): min=124, max=427, avg=203.39, stdev=39.75 00:10:07.013 clat percentiles (usec): 00:10:07.013 | 1.00th=[ 109], 5.00th=[ 117], 10.00th=[ 124], 20.00th=[ 135], 00:10:07.013 | 30.00th=[ 151], 40.00th=[ 180], 50.00th=[ 190], 60.00th=[ 198], 00:10:07.013 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 235], 00:10:07.013 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 277], 00:10:07.013 | 99.99th=[ 330] 00:10:07.013 bw ( KiB/s): min= 8192, max= 8192, per=23.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:07.013 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:07.013 lat (usec) : 250=58.81%, 500=38.85%, 750=2.34% 00:10:07.013 cpu : usr=1.50%, sys=6.80%, ctx=3802, majf=0, minf=13 00:10:07.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.013 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.013 issued rwts: total=1754,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.013 00:10:07.013 Run status group 0 (all jobs): 00:10:07.013 READ: bw=29.2MiB/s (30.6MB/s), 6505KiB/s-8184KiB/s (6662kB/s-8380kB/s), io=29.2MiB (30.6MB), run=1001-1001msec 00:10:07.013 WRITE: bw=34.8MiB/s (36.5MB/s), 8184KiB/s-9630KiB/s (8380kB/s-9861kB/s), io=34.8MiB (36.5MB), run=1001-1001msec 00:10:07.013 00:10:07.013 Disk stats (read/write): 00:10:07.013 nvme0n1: ios=1840/2048, merge=0/0, ticks=441/394, in_queue=835, util=88.18% 00:10:07.013 nvme0n2: ios=1837/2048, merge=0/0, ticks=410/325, in_queue=735, util=88.87% 00:10:07.013 nvme0n3: ios=1536/1537, merge=0/0, ticks=487/311, in_queue=798, util=89.27% 00:10:07.013 nvme0n4: ios=1536/1807, merge=0/0, ticks=465/350, in_queue=815, util=89.82% 00:10:07.013 16:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:07.013 [global] 00:10:07.013 thread=1 00:10:07.013 invalidate=1 00:10:07.013 rw=randwrite 00:10:07.013 time_based=1 00:10:07.013 runtime=1 00:10:07.013 ioengine=libaio 00:10:07.013 direct=1 00:10:07.013 bs=4096 00:10:07.013 iodepth=1 00:10:07.013 norandommap=0 00:10:07.013 numjobs=1 00:10:07.013 00:10:07.013 verify_dump=1 00:10:07.013 verify_backlog=512 00:10:07.013 verify_state_save=0 00:10:07.013 do_verify=1 00:10:07.013 verify=crc32c-intel 00:10:07.013 [job0] 00:10:07.013 filename=/dev/nvme0n1 00:10:07.013 [job1] 00:10:07.013 filename=/dev/nvme0n2 00:10:07.013 [job2] 00:10:07.013 filename=/dev/nvme0n3 00:10:07.013 [job3] 00:10:07.013 filename=/dev/nvme0n4 00:10:07.013 Could not set queue depth (nvme0n1) 00:10:07.013 Could not set queue depth (nvme0n2) 00:10:07.013 Could not set queue depth (nvme0n3) 00:10:07.013 Could not set queue depth (nvme0n4) 00:10:07.272 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.272 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.272 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.272 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.272 fio-3.35 00:10:07.272 Starting 4 threads 00:10:08.648 00:10:08.648 job0: (groupid=0, jobs=1): err= 0: pid=68588: Mon Jul 15 16:24:53 2024 00:10:08.648 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:08.648 slat (nsec): min=12599, max=46149, avg=16325.95, stdev=2997.73 00:10:08.648 clat (usec): min=142, max=901, avg=257.53, stdev=45.54 00:10:08.648 lat (usec): min=159, max=919, avg=273.85, stdev=46.34 00:10:08.648 clat percentiles (usec): 00:10:08.648 | 1.00th=[ 186], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:10:08.648 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:10:08.648 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 322], 00:10:08.648 | 99.00th=[ 437], 99.50th=[ 461], 99.90th=[ 824], 99.95th=[ 873], 00:10:08.648 | 99.99th=[ 906] 00:10:08.648 write: IOPS=2071, BW=8288KiB/s (8487kB/s)(8296KiB/1001msec); 0 zone resets 00:10:08.648 slat (usec): min=17, max=107, avg=22.50, stdev= 4.84 00:10:08.648 clat (usec): min=103, max=562, avg=185.37, 
stdev=32.98 00:10:08.648 lat (usec): min=127, max=596, avg=207.87, stdev=34.71 00:10:08.648 clat percentiles (usec): 00:10:08.648 | 1.00th=[ 118], 5.00th=[ 139], 10.00th=[ 165], 20.00th=[ 172], 00:10:08.648 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:08.648 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 221], 00:10:08.648 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 537], 00:10:08.648 | 99.99th=[ 562] 00:10:08.648 bw ( KiB/s): min= 8192, max= 8192, per=24.71%, avg=8192.00, stdev= 0.00, samples=1 00:10:08.648 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:08.648 lat (usec) : 250=71.96%, 500=27.78%, 750=0.15%, 1000=0.12% 00:10:08.648 cpu : usr=1.90%, sys=6.10%, ctx=4129, majf=0, minf=17 00:10:08.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.648 issued rwts: total=2048,2074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.648 job1: (groupid=0, jobs=1): err= 0: pid=68589: Mon Jul 15 16:24:53 2024 00:10:08.648 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:08.648 slat (nsec): min=12932, max=45094, avg=16125.53, stdev=3288.05 00:10:08.648 clat (usec): min=146, max=2523, avg=258.29, stdev=66.44 00:10:08.648 lat (usec): min=159, max=2540, avg=274.42, stdev=66.83 00:10:08.648 clat percentiles (usec): 00:10:08.648 | 1.00th=[ 182], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:10:08.648 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:10:08.648 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 326], 00:10:08.648 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 873], 99.95th=[ 1123], 00:10:08.648 | 99.99th=[ 2540] 00:10:08.648 write: IOPS=2133, BW=8535KiB/s (8740kB/s)(8544KiB/1001msec); 0 zone resets 00:10:08.648 slat (usec): min=14, max=109, avg=22.57, stdev= 4.55 00:10:08.648 clat (usec): min=96, max=461, avg=178.82, stdev=24.98 00:10:08.648 lat (usec): min=116, max=480, avg=201.40, stdev=25.75 00:10:08.648 clat percentiles (usec): 00:10:08.648 | 1.00th=[ 102], 5.00th=[ 120], 10.00th=[ 161], 20.00th=[ 169], 00:10:08.648 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:10:08.648 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:10:08.648 | 99.00th=[ 227], 99.50th=[ 247], 99.90th=[ 351], 99.95th=[ 441], 00:10:08.648 | 99.99th=[ 461] 00:10:08.648 bw ( KiB/s): min= 8456, max= 8456, per=25.50%, avg=8456.00, stdev= 0.00, samples=1 00:10:08.648 iops : min= 2114, max= 2114, avg=2114.00, stdev= 0.00, samples=1 00:10:08.648 lat (usec) : 100=0.36%, 250=73.06%, 500=26.51%, 1000=0.02% 00:10:08.648 lat (msec) : 2=0.02%, 4=0.02% 00:10:08.648 cpu : usr=1.90%, sys=6.20%, ctx=4184, majf=0, minf=5 00:10:08.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.648 issued rwts: total=2048,2136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.648 job2: (groupid=0, jobs=1): err= 0: pid=68590: Mon Jul 15 16:24:53 2024 00:10:08.648 read: IOPS=1850, BW=7401KiB/s (7578kB/s)(7408KiB/1001msec) 00:10:08.648 slat (nsec): 
min=12147, max=43050, avg=15005.56, stdev=4626.53 00:10:08.648 clat (usec): min=163, max=548, avg=271.24, stdev=32.59 00:10:08.648 lat (usec): min=185, max=565, avg=286.25, stdev=35.45 00:10:08.648 clat percentiles (usec): 00:10:08.648 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 253], 00:10:08.648 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:10:08.648 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 314], 00:10:08.648 | 99.00th=[ 416], 99.50th=[ 424], 99.90th=[ 490], 99.95th=[ 553], 00:10:08.648 | 99.99th=[ 553] 00:10:08.648 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:08.648 slat (nsec): min=17593, max=84130, avg=20807.56, stdev=4338.32 00:10:08.648 clat (usec): min=111, max=726, avg=205.22, stdev=22.10 00:10:08.648 lat (usec): min=130, max=745, avg=226.03, stdev=23.08 00:10:08.648 clat percentiles (usec): 00:10:08.648 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:10:08.648 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:10:08.648 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 229], 00:10:08.648 | 99.00th=[ 243], 99.50th=[ 297], 99.90th=[ 338], 99.95th=[ 685], 00:10:08.648 | 99.99th=[ 725] 00:10:08.648 bw ( KiB/s): min= 8192, max= 8192, per=24.71%, avg=8192.00, stdev= 0.00, samples=1 00:10:08.648 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:08.648 lat (usec) : 250=57.82%, 500=42.10%, 750=0.08% 00:10:08.648 cpu : usr=1.40%, sys=5.80%, ctx=3900, majf=0, minf=12 00:10:08.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.648 issued rwts: total=1852,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.648 job3: (groupid=0, jobs=1): err= 0: pid=68592: Mon Jul 15 16:24:53 2024 00:10:08.648 read: IOPS=1913, BW=7653KiB/s (7836kB/s)(7668KiB/1002msec) 00:10:08.648 slat (nsec): min=12242, max=35868, avg=14974.12, stdev=2380.76 00:10:08.648 clat (usec): min=173, max=1987, avg=262.39, stdev=50.28 00:10:08.648 lat (usec): min=188, max=2003, avg=277.36, stdev=50.43 00:10:08.648 clat percentiles (usec): 00:10:08.648 | 1.00th=[ 184], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 251], 00:10:08.648 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:10:08.648 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:10:08.648 | 99.00th=[ 318], 99.50th=[ 343], 99.90th=[ 1270], 99.95th=[ 1991], 00:10:08.648 | 99.99th=[ 1991] 00:10:08.648 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:10:08.649 slat (nsec): min=17651, max=87650, avg=21859.50, stdev=4233.32 00:10:08.649 clat (usec): min=106, max=751, avg=203.94, stdev=23.91 00:10:08.649 lat (usec): min=127, max=768, avg=225.80, stdev=24.76 00:10:08.649 clat percentiles (usec): 00:10:08.649 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:10:08.649 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 204], 00:10:08.649 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 229], 00:10:08.649 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 586], 99.95th=[ 668], 00:10:08.649 | 99.99th=[ 750] 00:10:08.649 bw ( KiB/s): min= 8192, max= 8192, per=24.71%, avg=8192.00, stdev= 0.00, samples=2 00:10:08.649 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, 
samples=2 00:10:08.649 lat (usec) : 250=60.30%, 500=39.57%, 750=0.05%, 1000=0.03% 00:10:08.649 lat (msec) : 2=0.05% 00:10:08.649 cpu : usr=1.70%, sys=5.79%, ctx=3966, majf=0, minf=11 00:10:08.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.649 issued rwts: total=1917,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.649 00:10:08.649 Run status group 0 (all jobs): 00:10:08.649 READ: bw=30.7MiB/s (32.1MB/s), 7401KiB/s-8184KiB/s (7578kB/s-8380kB/s), io=30.7MiB (32.2MB), run=1001-1002msec 00:10:08.649 WRITE: bw=32.4MiB/s (34.0MB/s), 8176KiB/s-8535KiB/s (8372kB/s-8740kB/s), io=32.4MiB (34.0MB), run=1001-1002msec 00:10:08.649 00:10:08.649 Disk stats (read/write): 00:10:08.649 nvme0n1: ios=1597/2048, merge=0/0, ticks=427/404, in_queue=831, util=88.05% 00:10:08.649 nvme0n2: ios=1604/2048, merge=0/0, ticks=451/385, in_queue=836, util=88.22% 00:10:08.649 nvme0n3: ios=1536/1873, merge=0/0, ticks=411/400, in_queue=811, util=89.12% 00:10:08.649 nvme0n4: ios=1536/1874, merge=0/0, ticks=420/394, in_queue=814, util=89.68% 00:10:08.649 16:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:08.649 [global] 00:10:08.649 thread=1 00:10:08.649 invalidate=1 00:10:08.649 rw=write 00:10:08.649 time_based=1 00:10:08.649 runtime=1 00:10:08.649 ioengine=libaio 00:10:08.649 direct=1 00:10:08.649 bs=4096 00:10:08.649 iodepth=128 00:10:08.649 norandommap=0 00:10:08.649 numjobs=1 00:10:08.649 00:10:08.649 verify_dump=1 00:10:08.649 verify_backlog=512 00:10:08.649 verify_state_save=0 00:10:08.649 do_verify=1 00:10:08.649 verify=crc32c-intel 00:10:08.649 [job0] 00:10:08.649 filename=/dev/nvme0n1 00:10:08.649 [job1] 00:10:08.649 filename=/dev/nvme0n2 00:10:08.649 [job2] 00:10:08.649 filename=/dev/nvme0n3 00:10:08.649 [job3] 00:10:08.649 filename=/dev/nvme0n4 00:10:08.649 Could not set queue depth (nvme0n1) 00:10:08.649 Could not set queue depth (nvme0n2) 00:10:08.649 Could not set queue depth (nvme0n3) 00:10:08.649 Could not set queue depth (nvme0n4) 00:10:08.649 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.649 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.649 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.649 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.649 fio-3.35 00:10:08.649 Starting 4 threads 00:10:10.022 00:10:10.022 job0: (groupid=0, jobs=1): err= 0: pid=68656: Mon Jul 15 16:24:55 2024 00:10:10.022 read: IOPS=5640, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1003msec) 00:10:10.022 slat (usec): min=3, max=4483, avg=83.76, stdev=352.76 00:10:10.022 clat (usec): min=469, max=15698, avg=11114.09, stdev=989.54 00:10:10.022 lat (usec): min=4246, max=15728, avg=11197.86, stdev=981.82 00:10:10.022 clat percentiles (usec): 00:10:10.022 | 1.00th=[ 8029], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10814], 00:10:10.022 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11207], 00:10:10.022 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12518], 00:10:10.022 | 
99.00th=[13829], 99.50th=[14222], 99.90th=[15139], 99.95th=[15270], 00:10:10.022 | 99.99th=[15664] 00:10:10.022 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:10.022 slat (usec): min=9, max=4218, avg=78.28, stdev=418.59 00:10:10.022 clat (usec): min=4614, max=15625, avg=10415.13, stdev=1090.81 00:10:10.022 lat (usec): min=4631, max=15643, avg=10493.41, stdev=1156.53 00:10:10.022 clat percentiles (usec): 00:10:10.022 | 1.00th=[ 6980], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10028], 00:10:10.022 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:10:10.022 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11863], 00:10:10.022 | 99.00th=[13960], 99.50th=[14615], 99.90th=[15139], 99.95th=[15139], 00:10:10.022 | 99.99th=[15664] 00:10:10.022 bw ( KiB/s): min=23752, max=24576, per=36.17%, avg=24164.00, stdev=582.66, samples=2 00:10:10.022 iops : min= 5938, max= 6144, avg=6041.00, stdev=145.66, samples=2 00:10:10.022 lat (usec) : 500=0.01% 00:10:10.022 lat (msec) : 10=15.11%, 20=84.88% 00:10:10.022 cpu : usr=5.49%, sys=15.67%, ctx=417, majf=0, minf=9 00:10:10.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:10.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.022 issued rwts: total=5657,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.022 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.022 job1: (groupid=0, jobs=1): err= 0: pid=68657: Mon Jul 15 16:24:55 2024 00:10:10.022 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:10:10.022 slat (usec): min=4, max=11866, avg=194.51, stdev=785.31 00:10:10.022 clat (usec): min=14972, max=35519, avg=24380.28, stdev=3101.36 00:10:10.022 lat (usec): min=15838, max=35542, avg=24574.79, stdev=3111.06 00:10:10.022 clat percentiles (usec): 00:10:10.022 | 1.00th=[17695], 5.00th=[19792], 10.00th=[20841], 20.00th=[22938], 00:10:10.022 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24511], 00:10:10.022 | 70.00th=[25035], 80.00th=[25560], 90.00th=[27657], 95.00th=[31589], 00:10:10.022 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:10:10.022 | 99.99th=[35390] 00:10:10.022 write: IOPS=2788, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1013msec); 0 zone resets 00:10:10.022 slat (usec): min=5, max=8884, avg=170.04, stdev=692.59 00:10:10.022 clat (usec): min=9801, max=36603, avg=23249.52, stdev=4339.78 00:10:10.022 lat (usec): min=9829, max=36622, avg=23419.56, stdev=4348.04 00:10:10.022 clat percentiles (usec): 00:10:10.022 | 1.00th=[11469], 5.00th=[15401], 10.00th=[17171], 20.00th=[20841], 00:10:10.022 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[24511], 00:10:10.022 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26608], 95.00th=[32113], 00:10:10.022 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:10:10.022 | 99.99th=[36439] 00:10:10.022 bw ( KiB/s): min= 9296, max=12288, per=16.15%, avg=10792.00, stdev=2115.66, samples=2 00:10:10.022 iops : min= 2324, max= 3072, avg=2698.00, stdev=528.92, samples=2 00:10:10.022 lat (msec) : 10=0.07%, 20=12.48%, 50=87.45% 00:10:10.022 cpu : usr=2.96%, sys=7.41%, ctx=833, majf=0, minf=7 00:10:10.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:10.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:10:10.022 issued rwts: total=2560,2825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.022 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.022 job2: (groupid=0, jobs=1): err= 0: pid=68658: Mon Jul 15 16:24:55 2024 00:10:10.022 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:10.022 slat (usec): min=5, max=3114, avg=94.11, stdev=442.41 00:10:10.022 clat (usec): min=8794, max=13904, avg=12599.26, stdev=612.26 00:10:10.022 lat (usec): min=8812, max=13922, avg=12693.37, stdev=434.95 00:10:10.022 clat percentiles (usec): 00:10:10.022 | 1.00th=[ 9896], 5.00th=[12125], 10.00th=[12256], 20.00th=[12387], 00:10:10.022 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:10:10.022 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304], 00:10:10.022 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:10:10.022 | 99.99th=[13960] 00:10:10.022 write: IOPS=5169, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1003msec); 0 zone resets 00:10:10.022 slat (usec): min=9, max=2775, avg=92.07, stdev=384.91 00:10:10.022 clat (usec): min=251, max=13263, avg=11983.44, stdev=1020.94 00:10:10.022 lat (usec): min=2294, max=13299, avg=12075.51, stdev=942.98 00:10:10.022 clat percentiles (usec): 00:10:10.022 | 1.00th=[ 5866], 5.00th=[11338], 10.00th=[11731], 20.00th=[11863], 00:10:10.022 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:10:10.022 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:10:10.022 | 99.00th=[12911], 99.50th=[12911], 99.90th=[13042], 99.95th=[13173], 00:10:10.022 | 99.99th=[13304] 00:10:10.022 bw ( KiB/s): min=20439, max=20480, per=30.62%, avg=20459.50, stdev=28.99, samples=2 00:10:10.022 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:10:10.022 lat (usec) : 500=0.01% 00:10:10.022 lat (msec) : 4=0.31%, 10=2.02%, 20=97.66% 00:10:10.022 cpu : usr=4.99%, sys=14.17%, ctx=324, majf=0, minf=8 00:10:10.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:10.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.022 issued rwts: total=5120,5185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.022 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.022 job3: (groupid=0, jobs=1): err= 0: pid=68659: Mon Jul 15 16:24:55 2024 00:10:10.022 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:10:10.022 slat (usec): min=5, max=9869, avg=193.70, stdev=743.67 00:10:10.022 clat (usec): min=13781, max=34761, avg=24095.19, stdev=3283.23 00:10:10.022 lat (usec): min=13794, max=34781, avg=24288.89, stdev=3291.18 00:10:10.022 clat percentiles (usec): 00:10:10.022 | 1.00th=[15270], 5.00th=[17957], 10.00th=[20317], 20.00th=[22676], 00:10:10.022 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:10:10.022 | 70.00th=[24773], 80.00th=[25560], 90.00th=[27919], 95.00th=[30802], 00:10:10.022 | 99.00th=[32900], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:10:10.022 | 99.99th=[34866] 00:10:10.022 write: IOPS=2738, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1010msec); 0 zone resets 00:10:10.022 slat (usec): min=5, max=6403, avg=174.70, stdev=685.06 00:10:10.022 clat (usec): min=8210, max=35944, avg=23550.01, stdev=3642.55 00:10:10.022 lat (usec): min=9399, max=35975, avg=23724.71, stdev=3640.05 00:10:10.022 clat percentiles (usec): 00:10:10.022 | 1.00th=[13435], 5.00th=[16909], 
10.00th=[18744], 20.00th=[22152], 00:10:10.022 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23987], 60.00th=[24511], 00:10:10.022 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26084], 95.00th=[30802], 00:10:10.022 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35390], 99.95th=[35914], 00:10:10.022 | 99.99th=[35914] 00:10:10.022 bw ( KiB/s): min= 8936, max=12176, per=15.80%, avg=10556.00, stdev=2291.03, samples=2 00:10:10.022 iops : min= 2234, max= 3044, avg=2639.00, stdev=572.76, samples=2 00:10:10.022 lat (msec) : 10=0.23%, 20=11.68%, 50=88.10% 00:10:10.022 cpu : usr=3.17%, sys=7.23%, ctx=790, majf=0, minf=15 00:10:10.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:10.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.023 issued rwts: total=2560,2766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.023 00:10:10.023 Run status group 0 (all jobs): 00:10:10.023 READ: bw=61.3MiB/s (64.3MB/s), 9.87MiB/s-22.0MiB/s (10.4MB/s-23.1MB/s), io=62.1MiB (65.1MB), run=1003-1013msec 00:10:10.023 WRITE: bw=65.2MiB/s (68.4MB/s), 10.7MiB/s-23.9MiB/s (11.2MB/s-25.1MB/s), io=66.1MiB (69.3MB), run=1003-1013msec 00:10:10.023 00:10:10.023 Disk stats (read/write): 00:10:10.023 nvme0n1: ios=5041/5120, merge=0/0, ticks=26910/22085, in_queue=48995, util=88.47% 00:10:10.023 nvme0n2: ios=2096/2552, merge=0/0, ticks=24786/27194, in_queue=51980, util=89.37% 00:10:10.023 nvme0n3: ios=4337/4608, merge=0/0, ticks=12268/11737, in_queue=24005, util=89.61% 00:10:10.023 nvme0n4: ios=2048/2490, merge=0/0, ticks=24350/26999, in_queue=51349, util=88.41% 00:10:10.023 16:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:10.023 [global] 00:10:10.023 thread=1 00:10:10.023 invalidate=1 00:10:10.023 rw=randwrite 00:10:10.023 time_based=1 00:10:10.023 runtime=1 00:10:10.023 ioengine=libaio 00:10:10.023 direct=1 00:10:10.023 bs=4096 00:10:10.023 iodepth=128 00:10:10.023 norandommap=0 00:10:10.023 numjobs=1 00:10:10.023 00:10:10.023 verify_dump=1 00:10:10.023 verify_backlog=512 00:10:10.023 verify_state_save=0 00:10:10.023 do_verify=1 00:10:10.023 verify=crc32c-intel 00:10:10.023 [job0] 00:10:10.023 filename=/dev/nvme0n1 00:10:10.023 [job1] 00:10:10.023 filename=/dev/nvme0n2 00:10:10.023 [job2] 00:10:10.023 filename=/dev/nvme0n3 00:10:10.023 [job3] 00:10:10.023 filename=/dev/nvme0n4 00:10:10.023 Could not set queue depth (nvme0n1) 00:10:10.023 Could not set queue depth (nvme0n2) 00:10:10.023 Could not set queue depth (nvme0n3) 00:10:10.023 Could not set queue depth (nvme0n4) 00:10:10.023 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.023 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.023 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.023 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.023 fio-3.35 00:10:10.023 Starting 4 threads 00:10:11.398 00:10:11.398 job0: (groupid=0, jobs=1): err= 0: pid=68713: Mon Jul 15 16:24:56 2024 00:10:11.398 read: IOPS=5736, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1004msec) 00:10:11.398 slat (usec): min=7, max=5443, 
avg=81.91, stdev=450.37 00:10:11.398 clat (usec): min=1507, max=19357, avg=11201.53, stdev=1211.42 00:10:11.398 lat (usec): min=3857, max=24721, avg=11283.45, stdev=1224.87 00:10:11.398 clat percentiles (usec): 00:10:11.398 | 1.00th=[ 7439], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[10814], 00:10:11.398 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:11.398 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:10:11.398 | 99.00th=[15533], 99.50th=[17171], 99.90th=[19268], 99.95th=[19268], 00:10:11.398 | 99.99th=[19268] 00:10:11.398 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:10:11.398 slat (usec): min=10, max=7062, avg=78.88, stdev=445.42 00:10:11.398 clat (usec): min=5459, max=16569, avg=10188.24, stdev=926.06 00:10:11.398 lat (usec): min=7515, max=16818, avg=10267.12, stdev=832.09 00:10:11.398 clat percentiles (usec): 00:10:11.398 | 1.00th=[ 6980], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9503], 00:10:11.398 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:10:11.398 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:10:11.398 | 99.00th=[12649], 99.50th=[12780], 99.90th=[16450], 99.95th=[16450], 00:10:11.398 | 99.99th=[16581] 00:10:11.398 bw ( KiB/s): min=24568, max=24576, per=36.00%, avg=24572.00, stdev= 5.66, samples=2 00:10:11.398 iops : min= 6142, max= 6144, avg=6143.00, stdev= 1.41, samples=2 00:10:11.398 lat (msec) : 2=0.01%, 4=0.08%, 10=21.94%, 20=77.97% 00:10:11.398 cpu : usr=5.88%, sys=15.15%, ctx=277, majf=0, minf=13 00:10:11.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.398 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.398 job1: (groupid=0, jobs=1): err= 0: pid=68714: Mon Jul 15 16:24:56 2024 00:10:11.398 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:10:11.398 slat (usec): min=6, max=8533, avg=197.65, stdev=771.01 00:10:11.398 clat (usec): min=16879, max=37660, avg=24814.82, stdev=3050.68 00:10:11.398 lat (usec): min=16892, max=37686, avg=25012.47, stdev=3064.04 00:10:11.398 clat percentiles (usec): 00:10:11.398 | 1.00th=[17695], 5.00th=[20317], 10.00th=[21365], 20.00th=[22938], 00:10:11.398 | 30.00th=[23462], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:10:11.398 | 70.00th=[25297], 80.00th=[27132], 90.00th=[28967], 95.00th=[31589], 00:10:11.398 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:10:11.398 | 99.99th=[37487] 00:10:11.398 write: IOPS=2798, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1009msec); 0 zone resets 00:10:11.398 slat (usec): min=5, max=14074, avg=167.61, stdev=785.54 00:10:11.398 clat (usec): min=8156, max=34869, avg=22251.54, stdev=4044.78 00:10:11.398 lat (usec): min=8676, max=34884, avg=22419.15, stdev=4036.56 00:10:11.398 clat percentiles (usec): 00:10:11.398 | 1.00th=[11863], 5.00th=[15664], 10.00th=[16909], 20.00th=[19530], 00:10:11.398 | 30.00th=[21365], 40.00th=[22152], 50.00th=[22676], 60.00th=[22938], 00:10:11.398 | 70.00th=[23987], 80.00th=[24773], 90.00th=[26346], 95.00th=[29754], 00:10:11.398 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:10:11.398 | 99.99th=[34866] 00:10:11.398 bw ( KiB/s): min= 9320, max=12256, per=15.81%, avg=10788.00, 
stdev=2076.07, samples=2 00:10:11.398 iops : min= 2330, max= 3064, avg=2697.00, stdev=519.02, samples=2 00:10:11.398 lat (msec) : 10=0.32%, 20=12.67%, 50=87.02% 00:10:11.398 cpu : usr=2.38%, sys=8.04%, ctx=655, majf=0, minf=21 00:10:11.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.398 issued rwts: total=2560,2824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.398 job2: (groupid=0, jobs=1): err= 0: pid=68715: Mon Jul 15 16:24:56 2024 00:10:11.398 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:11.398 slat (usec): min=4, max=5961, avg=92.18, stdev=529.64 00:10:11.398 clat (usec): min=7302, max=20213, avg=12881.31, stdev=1293.46 00:10:11.398 lat (usec): min=7324, max=24314, avg=12973.48, stdev=1310.91 00:10:11.398 clat percentiles (usec): 00:10:11.398 | 1.00th=[ 8455], 5.00th=[11338], 10.00th=[11994], 20.00th=[12387], 00:10:11.398 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:10:11.398 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13960], 95.00th=[14484], 00:10:11.398 | 99.00th=[19006], 99.50th=[19268], 99.90th=[20317], 99.95th=[20317], 00:10:11.398 | 99.99th=[20317] 00:10:11.398 write: IOPS=5310, BW=20.7MiB/s (21.8MB/s)(20.8MiB/1004msec); 0 zone resets 00:10:11.398 slat (usec): min=10, max=8429, avg=91.28, stdev=532.33 00:10:11.398 clat (usec): min=3067, max=19166, avg=11457.50, stdev=1445.13 00:10:11.398 lat (usec): min=3097, max=19198, avg=11548.77, stdev=1374.35 00:10:11.398 clat percentiles (usec): 00:10:11.398 | 1.00th=[ 4293], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10945], 00:10:11.398 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:10:11.398 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:10:11.398 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15401], 99.95th=[15401], 00:10:11.398 | 99.99th=[19268] 00:10:11.398 bw ( KiB/s): min=20480, max=21160, per=30.50%, avg=20820.00, stdev=480.83, samples=2 00:10:11.398 iops : min= 5120, max= 5290, avg=5205.00, stdev=120.21, samples=2 00:10:11.398 lat (msec) : 4=0.29%, 10=6.45%, 20=93.11%, 50=0.15% 00:10:11.398 cpu : usr=4.19%, sys=14.76%, ctx=263, majf=0, minf=5 00:10:11.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.398 issued rwts: total=5120,5332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.398 job3: (groupid=0, jobs=1): err= 0: pid=68716: Mon Jul 15 16:24:56 2024 00:10:11.398 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:10:11.398 slat (usec): min=5, max=8376, avg=192.38, stdev=755.96 00:10:11.398 clat (usec): min=15380, max=37100, avg=24907.30, stdev=3342.58 00:10:11.398 lat (usec): min=15409, max=38510, avg=25099.67, stdev=3345.31 00:10:11.398 clat percentiles (usec): 00:10:11.398 | 1.00th=[18482], 5.00th=[20055], 10.00th=[21103], 20.00th=[22938], 00:10:11.398 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:10:11.398 | 70.00th=[25297], 80.00th=[26870], 90.00th=[29754], 95.00th=[32375], 00:10:11.398 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 
99.95th=[36439], 00:10:11.398 | 99.99th=[36963] 00:10:11.398 write: IOPS=2932, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1012msec); 0 zone resets 00:10:11.398 slat (usec): min=6, max=11889, avg=163.90, stdev=761.29 00:10:11.398 clat (usec): min=9544, max=35291, avg=21719.81, stdev=4714.42 00:10:11.398 lat (usec): min=9702, max=37771, avg=21883.71, stdev=4751.17 00:10:11.398 clat percentiles (usec): 00:10:11.398 | 1.00th=[10945], 5.00th=[13173], 10.00th=[14484], 20.00th=[17171], 00:10:11.398 | 30.00th=[20317], 40.00th=[21890], 50.00th=[22414], 60.00th=[22938], 00:10:11.398 | 70.00th=[23725], 80.00th=[25035], 90.00th=[27657], 95.00th=[29230], 00:10:11.398 | 99.00th=[31589], 99.50th=[33424], 99.90th=[34866], 99.95th=[34866], 00:10:11.398 | 99.99th=[35390] 00:10:11.398 bw ( KiB/s): min=10440, max=12263, per=16.63%, avg=11351.50, stdev=1289.06, samples=2 00:10:11.398 iops : min= 2610, max= 3065, avg=2837.50, stdev=321.73, samples=2 00:10:11.398 lat (msec) : 10=0.11%, 20=17.66%, 50=82.24% 00:10:11.398 cpu : usr=1.78%, sys=8.51%, ctx=686, majf=0, minf=7 00:10:11.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.398 issued rwts: total=2560,2968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.398 00:10:11.398 Run status group 0 (all jobs): 00:10:11.398 READ: bw=61.8MiB/s (64.8MB/s), 9.88MiB/s-22.4MiB/s (10.4MB/s-23.5MB/s), io=62.5MiB (65.5MB), run=1004-1012msec 00:10:11.398 WRITE: bw=66.7MiB/s (69.9MB/s), 10.9MiB/s-23.9MiB/s (11.5MB/s-25.1MB/s), io=67.5MiB (70.7MB), run=1004-1012msec 00:10:11.398 00:10:11.398 Disk stats (read/write): 00:10:11.398 nvme0n1: ios=5018/5120, merge=0/0, ticks=52204/47488, in_queue=99692, util=87.66% 00:10:11.398 nvme0n2: ios=2073/2498, merge=0/0, ticks=24752/25745, in_queue=50497, util=86.48% 00:10:11.398 nvme0n3: ios=4225/4608, merge=0/0, ticks=51362/48275, in_queue=99637, util=88.90% 00:10:11.398 nvme0n4: ios=2090/2560, merge=0/0, ticks=25813/25735, in_queue=51548, util=88.82% 00:10:11.398 16:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:11.399 16:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68729 00:10:11.399 16:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:11.399 16:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:11.399 [global] 00:10:11.399 thread=1 00:10:11.399 invalidate=1 00:10:11.399 rw=read 00:10:11.399 time_based=1 00:10:11.399 runtime=10 00:10:11.399 ioengine=libaio 00:10:11.399 direct=1 00:10:11.399 bs=4096 00:10:11.399 iodepth=1 00:10:11.399 norandommap=1 00:10:11.399 numjobs=1 00:10:11.399 00:10:11.399 [job0] 00:10:11.399 filename=/dev/nvme0n1 00:10:11.399 [job1] 00:10:11.399 filename=/dev/nvme0n2 00:10:11.399 [job2] 00:10:11.399 filename=/dev/nvme0n3 00:10:11.399 [job3] 00:10:11.399 filename=/dev/nvme0n4 00:10:11.399 Could not set queue depth (nvme0n1) 00:10:11.399 Could not set queue depth (nvme0n2) 00:10:11.399 Could not set queue depth (nvme0n3) 00:10:11.399 Could not set queue depth (nvme0n4) 00:10:11.399 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.399 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:10:11.399 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.399 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.399 fio-3.35 00:10:11.399 Starting 4 threads 00:10:14.725 16:24:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:14.725 fio: pid=68772, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:14.725 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=44195840, buflen=4096 00:10:14.725 16:24:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:14.725 fio: pid=68771, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:14.725 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=58609664, buflen=4096 00:10:14.725 16:25:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.725 16:25:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:14.983 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=64311296, buflen=4096 00:10:14.983 fio: pid=68769, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:14.983 16:25:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.983 16:25:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:15.240 fio: pid=68770, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:15.240 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=61595648, buflen=4096 00:10:15.240 00:10:15.240 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68769: Mon Jul 15 16:25:00 2024 00:10:15.240 read: IOPS=4592, BW=17.9MiB/s (18.8MB/s)(61.3MiB/3419msec) 00:10:15.240 slat (usec): min=8, max=9368, avg=15.36, stdev=134.77 00:10:15.240 clat (usec): min=129, max=7929, avg=200.95, stdev=158.65 00:10:15.240 lat (usec): min=140, max=9551, avg=216.31, stdev=208.52 00:10:15.240 clat percentiles (usec): 00:10:15.240 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:10:15.240 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 172], 60.00th=[ 227], 00:10:15.240 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:10:15.240 | 99.00th=[ 293], 99.50th=[ 363], 99.90th=[ 1795], 99.95th=[ 4015], 00:10:15.240 | 99.99th=[ 7308] 00:10:15.240 bw ( KiB/s): min=15176, max=23360, per=30.68%, avg=18393.33, stdev=3506.72, samples=6 00:10:15.240 iops : min= 3794, max= 5840, avg=4598.33, stdev=876.68, samples=6 00:10:15.240 lat (usec) : 250=85.45%, 500=14.27%, 750=0.10%, 1000=0.04% 00:10:15.240 lat (msec) : 2=0.04%, 4=0.04%, 10=0.05% 00:10:15.240 cpu : usr=1.29%, sys=5.85%, ctx=15709, majf=0, minf=1 00:10:15.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.240 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.240 issued rwts: total=15702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.240 job1: 
(groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68770: Mon Jul 15 16:25:00 2024 00:10:15.240 read: IOPS=4037, BW=15.8MiB/s (16.5MB/s)(58.7MiB/3725msec) 00:10:15.240 slat (usec): min=10, max=15419, avg=18.26, stdev=234.55 00:10:15.240 clat (usec): min=128, max=2600, avg=227.86, stdev=63.54 00:10:15.240 lat (usec): min=141, max=15604, avg=246.12, stdev=242.32 00:10:15.240 clat percentiles (usec): 00:10:15.240 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 157], 00:10:15.241 | 30.00th=[ 231], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:10:15.241 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:10:15.241 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 482], 99.95th=[ 816], 00:10:15.241 | 99.99th=[ 2573] 00:10:15.241 bw ( KiB/s): min=14432, max=22074, per=26.23%, avg=15726.00, stdev=2807.95, samples=7 00:10:15.241 iops : min= 3608, max= 5518, avg=3931.43, stdev=701.80, samples=7 00:10:15.241 lat (usec) : 250=53.54%, 500=46.35%, 750=0.04%, 1000=0.03% 00:10:15.241 lat (msec) : 2=0.01%, 4=0.02% 00:10:15.241 cpu : usr=1.40%, sys=4.97%, ctx=15046, majf=0, minf=1 00:10:15.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.241 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.241 issued rwts: total=15039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.241 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68771: Mon Jul 15 16:25:00 2024 00:10:15.241 read: IOPS=4485, BW=17.5MiB/s (18.4MB/s)(55.9MiB/3190msec) 00:10:15.241 slat (usec): min=8, max=11187, avg=13.63, stdev=114.20 00:10:15.241 clat (usec): min=133, max=1679, avg=208.06, stdev=47.39 00:10:15.241 lat (usec): min=154, max=11445, avg=221.69, stdev=124.31 00:10:15.241 clat percentiles (usec): 00:10:15.241 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:10:15.241 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 194], 60.00th=[ 233], 00:10:15.241 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 269], 00:10:15.241 | 99.00th=[ 293], 99.50th=[ 343], 99.90th=[ 537], 99.95th=[ 775], 00:10:15.241 | 99.99th=[ 1012] 00:10:15.241 bw ( KiB/s): min=15192, max=21856, per=30.36%, avg=18201.33, stdev=3148.09, samples=6 00:10:15.241 iops : min= 3798, max= 5464, avg=4550.33, stdev=787.02, samples=6 00:10:15.241 lat (usec) : 250=81.78%, 500=18.09%, 750=0.07%, 1000=0.04% 00:10:15.241 lat (msec) : 2=0.01% 00:10:15.241 cpu : usr=0.97%, sys=5.33%, ctx=14314, majf=0, minf=1 00:10:15.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.241 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.241 issued rwts: total=14310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.241 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68772: Mon Jul 15 16:25:00 2024 00:10:15.241 read: IOPS=3667, BW=14.3MiB/s (15.0MB/s)(42.1MiB/2942msec) 00:10:15.241 slat (nsec): min=10984, max=79034, avg=13078.53, stdev=2017.94 00:10:15.241 clat (usec): min=151, max=1980, avg=258.02, stdev=32.79 00:10:15.241 lat (usec): min=166, max=1996, avg=271.10, stdev=32.93 
00:10:15.241 clat percentiles (usec): 00:10:15.241 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:10:15.241 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:10:15.241 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:10:15.241 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 537], 99.95th=[ 652], 00:10:15.241 | 99.99th=[ 1926] 00:10:15.241 bw ( KiB/s): min=14456, max=14936, per=24.46%, avg=14667.20, stdev=244.66, samples=5 00:10:15.241 iops : min= 3614, max= 3734, avg=3666.80, stdev=61.17, samples=5 00:10:15.241 lat (usec) : 250=33.45%, 500=66.43%, 750=0.06%, 1000=0.02% 00:10:15.241 lat (msec) : 2=0.03% 00:10:15.241 cpu : usr=1.22%, sys=4.22%, ctx=10796, majf=0, minf=1 00:10:15.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.241 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.241 issued rwts: total=10791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.241 00:10:15.241 Run status group 0 (all jobs): 00:10:15.241 READ: bw=58.6MiB/s (61.4MB/s), 14.3MiB/s-17.9MiB/s (15.0MB/s-18.8MB/s), io=218MiB (229MB), run=2942-3725msec 00:10:15.241 00:10:15.241 Disk stats (read/write): 00:10:15.241 nvme0n1: ios=15398/0, merge=0/0, ticks=3033/0, in_queue=3033, util=94.74% 00:10:15.241 nvme0n2: ios=14395/0, merge=0/0, ticks=3359/0, in_queue=3359, util=95.08% 00:10:15.241 nvme0n3: ios=14041/0, merge=0/0, ticks=2810/0, in_queue=2810, util=96.21% 00:10:15.241 nvme0n4: ios=10523/0, merge=0/0, ticks=2757/0, in_queue=2757, util=96.73% 00:10:15.241 16:25:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:15.241 16:25:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:15.530 16:25:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:15.530 16:25:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:15.788 16:25:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:15.788 16:25:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:16.046 16:25:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.046 16:25:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:16.304 16:25:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.304 16:25:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68729 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.561 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.561 nvmf hotplug test: fio failed as expected 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:16.561 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.819 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.819 rmmod nvme_tcp 00:10:16.819 rmmod nvme_fabrics 00:10:16.819 rmmod nvme_keyring 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68342 ']' 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68342 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68342 ']' 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68342 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68342 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:17.077 killing process with pid 68342 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68342' 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68342 00:10:17.077 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68342 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:17.339 ************************************ 00:10:17.339 END TEST nvmf_fio_target 00:10:17.339 ************************************ 00:10:17.339 00:10:17.339 real 0m19.538s 00:10:17.339 user 1m14.229s 00:10:17.339 sys 0m9.930s 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.339 16:25:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.339 16:25:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:17.339 16:25:02 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:17.339 16:25:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:17.339 16:25:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.339 16:25:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.339 ************************************ 00:10:17.339 START TEST nvmf_bdevio 00:10:17.339 ************************************ 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:17.339 * Looking for test storage... 
00:10:17.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.339 16:25:02 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:17.339 Cannot find device "nvmf_tgt_br" 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.339 Cannot find device "nvmf_tgt_br2" 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:17.339 Cannot find device "nvmf_tgt_br" 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:17.339 Cannot find device "nvmf_tgt_br2" 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:17.339 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.598 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.598 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:17.598 16:25:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:17.598 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:17.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:10:17.598 00:10:17.598 --- 10.0.0.2 ping statistics --- 00:10:17.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.599 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:17.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:17.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:10:17.599 00:10:17.599 --- 10.0.0.3 ping statistics --- 00:10:17.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.599 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:17.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:17.599 00:10:17.599 --- 10.0.0.1 ping statistics --- 00:10:17.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.599 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:17.599 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69040 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69040 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69040 ']' 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.888 16:25:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.888 [2024-07-15 16:25:03.207814] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:10:17.888 [2024-07-15 16:25:03.207956] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.888 [2024-07-15 16:25:03.347966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.148 [2024-07-15 16:25:03.467450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.148 [2024-07-15 16:25:03.468091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:18.148 [2024-07-15 16:25:03.468749] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.148 [2024-07-15 16:25:03.469269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.148 [2024-07-15 16:25:03.469541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.148 [2024-07-15 16:25:03.470073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:18.148 [2024-07-15 16:25:03.470219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:18.148 [2024-07-15 16:25:03.470338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:18.148 [2024-07-15 16:25:03.470346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.148 [2024-07-15 16:25:03.527511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.715 [2024-07-15 16:25:04.163654] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.715 Malloc0 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.715 [2024-07-15 16:25:04.244434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:18.715 { 00:10:18.715 "params": { 00:10:18.715 "name": "Nvme$subsystem", 00:10:18.715 "trtype": "$TEST_TRANSPORT", 00:10:18.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.715 "adrfam": "ipv4", 00:10:18.715 "trsvcid": "$NVMF_PORT", 00:10:18.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.715 "hdgst": ${hdgst:-false}, 00:10:18.715 "ddgst": ${ddgst:-false} 00:10:18.715 }, 00:10:18.715 "method": "bdev_nvme_attach_controller" 00:10:18.715 } 00:10:18.715 EOF 00:10:18.715 )") 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:18.715 16:25:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:18.715 "params": { 00:10:18.715 "name": "Nvme1", 00:10:18.715 "trtype": "tcp", 00:10:18.715 "traddr": "10.0.0.2", 00:10:18.715 "adrfam": "ipv4", 00:10:18.715 "trsvcid": "4420", 00:10:18.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.715 "hdgst": false, 00:10:18.715 "ddgst": false 00:10:18.715 }, 00:10:18.715 "method": "bdev_nvme_attach_controller" 00:10:18.715 }' 00:10:18.973 [2024-07-15 16:25:04.302313] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:10:18.973 [2024-07-15 16:25:04.302402] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69076 ] 00:10:18.973 [2024-07-15 16:25:04.443437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.232 [2024-07-15 16:25:04.552572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.232 [2024-07-15 16:25:04.552688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.232 [2024-07-15 16:25:04.552697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.232 [2024-07-15 16:25:04.618270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:19.232 I/O targets: 00:10:19.232 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:19.232 00:10:19.232 00:10:19.232 CUnit - A unit testing framework for C - Version 2.1-3 00:10:19.232 http://cunit.sourceforge.net/ 00:10:19.232 00:10:19.232 00:10:19.232 Suite: bdevio tests on: Nvme1n1 00:10:19.232 Test: blockdev write read block ...passed 00:10:19.232 Test: blockdev write zeroes read block ...passed 00:10:19.232 Test: blockdev write zeroes read no split ...passed 00:10:19.232 Test: blockdev write zeroes read split ...passed 00:10:19.232 Test: blockdev write zeroes read split partial ...passed 00:10:19.232 Test: blockdev reset ...[2024-07-15 16:25:04.765164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:19.232 [2024-07-15 16:25:04.765577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16de7c0 (9): Bad file descriptor 00:10:19.232 [2024-07-15 16:25:04.780839] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:19.232 passed 00:10:19.232 Test: blockdev write read 8 blocks ...passed 00:10:19.491 Test: blockdev write read size > 128k ...passed 00:10:19.491 Test: blockdev write read invalid size ...passed 00:10:19.491 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:19.491 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:19.491 Test: blockdev write read max offset ...passed 00:10:19.491 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:19.491 Test: blockdev writev readv 8 blocks ...passed 00:10:19.491 Test: blockdev writev readv 30 x 1block ...passed 00:10:19.491 Test: blockdev writev readv block ...passed 00:10:19.491 Test: blockdev writev readv size > 128k ...passed 00:10:19.491 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:19.491 Test: blockdev comparev and writev ...[2024-07-15 16:25:04.789366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.491 [2024-07-15 16:25:04.789558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:19.491 [2024-07-15 16:25:04.789585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.491 [2024-07-15 16:25:04.789597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:19.491 [2024-07-15 16:25:04.789932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.491 [2024-07-15 16:25:04.789957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:19.491 [2024-07-15 16:25:04.789974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.491 [2024-07-15 16:25:04.789989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:19.491 [2024-07-15 16:25:04.790287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.491 [2024-07-15 16:25:04.790308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:19.491 [2024-07-15 16:25:04.790325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.491 [2024-07-15 16:25:04.790335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:19.491 [2024-07-15 16:25:04.790618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.491 [2024-07-15 16:25:04.790639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:19.491 [2024-07-15 16:25:04.790656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.491 [2024-07-15 16:25:04.790666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:19.491 passed 00:10:19.491 Test: blockdev nvme passthru rw ...passed 00:10:19.491 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:25:04.791534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:19.491 [2024-07-15 16:25:04.791558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:19.491 passed 00:10:19.491 Test: blockdev nvme admin passthru ...[2024-07-15 16:25:04.791662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:19.491 [2024-07-15 16:25:04.791689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:19.491 [2024-07-15 16:25:04.791796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:19.491 [2024-07-15 16:25:04.791817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:19.491 [2024-07-15 16:25:04.791940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:19.491 [2024-07-15 16:25:04.791960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:19.491 passed 00:10:19.491 Test: blockdev copy ...passed 00:10:19.491 00:10:19.491 Run Summary: Type Total Ran Passed Failed Inactive 00:10:19.491 suites 1 1 n/a 0 0 00:10:19.491 tests 23 23 23 0 0 00:10:19.491 asserts 152 152 152 0 n/a 00:10:19.491 00:10:19.491 Elapsed time = 0.151 seconds 00:10:19.491 16:25:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.491 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.491 16:25:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:19.491 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.491 16:25:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:19.491 16:25:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:19.491 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.491 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:19.750 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:19.750 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:19.750 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.750 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:19.750 rmmod nvme_tcp 00:10:19.750 rmmod nvme_fabrics 00:10:19.750 rmmod nvme_keyring 00:10:19.750 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.750 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69040 ']' 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69040 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69040 ']' 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69040 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69040 00:10:19.751 killing process with pid 69040 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69040' 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69040 00:10:19.751 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69040 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:20.029 00:10:20.029 real 0m2.693s 00:10:20.029 user 0m8.833s 00:10:20.029 sys 0m0.747s 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.029 ************************************ 00:10:20.029 END TEST nvmf_bdevio 00:10:20.029 ************************************ 00:10:20.029 16:25:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.029 16:25:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:20.029 16:25:05 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:20.029 16:25:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:20.029 16:25:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.029 16:25:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:20.029 ************************************ 00:10:20.029 START TEST nvmf_auth_target 00:10:20.029 ************************************ 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:20.029 * Looking for test storage... 
00:10:20.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.029 16:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
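The digests, dhgroups, and keys/ckeys arrays traced just above define the matrix this test walks through. A condensed sketch of that loop, inferred from the for-lines that appear later in this trace (connect_authenticate is the script's own helper; this is an illustration, not the literal auth.sh source):

digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3; do
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # one target/host auth round per combination
    done
  done
done
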
nvmftestinit 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.030 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:20.288 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:20.288 Cannot find device "nvmf_tgt_br" 00:10:20.288 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:20.288 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.288 Cannot find device "nvmf_tgt_br2" 00:10:20.288 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:20.289 Cannot find device "nvmf_tgt_br" 00:10:20.289 
16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:20.289 Cannot find device "nvmf_tgt_br2" 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:20.289 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.548 16:25:05 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:20.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:10:20.548 00:10:20.548 --- 10.0.0.2 ping statistics --- 00:10:20.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.548 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:20.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:20.548 00:10:20.548 --- 10.0.0.3 ping statistics --- 00:10:20.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.548 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:20.548 00:10:20.548 --- 10.0.0.1 ping statistics --- 00:10:20.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.548 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69245 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69245 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69245 ']' 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.548 16:25:05 
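The nvmf_veth_init calls above build the virtual test network. A condensed sketch of that topology, using the interface names and addresses from the trace (the second target interface at 10.0.0.3, and the cleanup/error handling, are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, 10.0.0.2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joins the two veth peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                            # reachability check, as in the log
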
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.548 16:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69277 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=de591fbffd670aee6de34d38241c4a0ef4ce25e0e6a449f8 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.T3K 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key de591fbffd670aee6de34d38241c4a0ef4ce25e0e6a449f8 0 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 de591fbffd670aee6de34d38241c4a0ef4ce25e0e6a449f8 0 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=de591fbffd670aee6de34d38241c4a0ef4ce25e0e6a449f8 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:21.486 16:25:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.T3K 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.T3K 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.T3K 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:21.486 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4498553663f853da7e6f0e2e5a77c10f901c708add1d04e7d6ba9385c6e7ae1f 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Pkr 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4498553663f853da7e6f0e2e5a77c10f901c708add1d04e7d6ba9385c6e7ae1f 3 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4498553663f853da7e6f0e2e5a77c10f901c708add1d04e7d6ba9385c6e7ae1f 3 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4498553663f853da7e6f0e2e5a77c10f901c708add1d04e7d6ba9385c6e7ae1f 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Pkr 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Pkr 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Pkr 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=eaf34d8b7ccc7ddd16a577a91b2f54d4 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.DHA 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key eaf34d8b7ccc7ddd16a577a91b2f54d4 1 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 eaf34d8b7ccc7ddd16a577a91b2f54d4 1 
00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=eaf34d8b7ccc7ddd16a577a91b2f54d4 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.DHA 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.DHA 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.DHA 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42a15fb2799b7f9f9222485ce61cfe2ca74219a34b972301 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uBm 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 42a15fb2799b7f9f9222485ce61cfe2ca74219a34b972301 2 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 42a15fb2799b7f9f9222485ce61cfe2ca74219a34b972301 2 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=42a15fb2799b7f9f9222485ce61cfe2ca74219a34b972301 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uBm 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uBm 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.uBm 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:21.751 
16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4d417f1f2b4cb3e1e0d2dd41dd9d0d3c526a3970e4e4ef80 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Y8i 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4d417f1f2b4cb3e1e0d2dd41dd9d0d3c526a3970e4e4ef80 2 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4d417f1f2b4cb3e1e0d2dd41dd9d0d3c526a3970e4e4ef80 2 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4d417f1f2b4cb3e1e0d2dd41dd9d0d3c526a3970e4e4ef80 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Y8i 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Y8i 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Y8i 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:21.751 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a12317500dae60e11e8b81864f0d4b3a 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6Vk 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a12317500dae60e11e8b81864f0d4b3a 1 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a12317500dae60e11e8b81864f0d4b3a 1 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a12317500dae60e11e8b81864f0d4b3a 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:21.752 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6Vk 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6Vk 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.6Vk 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=be49125e9891e32dce4e83c95879cc4a4babdc7a3f13ae975985d6ead7027f84 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xUR 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key be49125e9891e32dce4e83c95879cc4a4babdc7a3f13ae975985d6ead7027f84 3 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 be49125e9891e32dce4e83c95879cc4a4babdc7a3f13ae975985d6ead7027f84 3 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=be49125e9891e32dce4e83c95879cc4a4babdc7a3f13ae975985d6ead7027f84 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xUR 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xUR 00:10:22.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.xUR 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69245 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69245 ']' 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.019 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
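All of the keys and controller keys above come out of gen_dhchap_key, which pairs xxd output with a small inline python formatter. A hypothetical condensation of that helper, assuming the usual DHHC-1 secret encoding of base64(secret || CRC-32), which is consistent with the --dhchap-secret strings used further down (a sketch, not the nvmf/common.sh source):

gen_dhchap_key() {   # e.g. gen_dhchap_key null 48, gen_dhchap_key sha512 64, as traced above
  local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  local digest=${digests[$1]} len=$2 key
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters of random key material
  python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                           # the ASCII hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")          # CRC-32 of the secret, appended little-endian
print(f"DHHC-1:0{sys.argv[2]}:{base64.b64encode(key + crc).decode()}:")
' "$key" "$digest"
}

gen_dhchap_key null 48   # -> DHHC-1:00:<base64>: like the --dhchap-secret values passed to nvme connect below
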
00:10:22.020 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.020 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.278 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.278 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:22.278 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69277 /var/tmp/host.sock 00:10:22.278 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69277 ']' 00:10:22.278 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:22.278 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.278 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:22.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:22.278 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.278 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.536 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.536 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:22.536 16:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:22.536 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.536 16:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.536 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.536 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:22.536 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.T3K 00:10:22.536 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.536 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.536 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.536 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.T3K 00:10:22.536 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.T3K 00:10:22.794 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Pkr ]] 00:10:22.794 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pkr 00:10:22.794 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.794 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.794 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.794 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pkr 00:10:22.794 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.Pkr 00:10:23.052 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:23.052 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.DHA 00:10:23.052 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.052 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.052 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.052 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.DHA 00:10:23.052 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.DHA 00:10:23.310 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.uBm ]] 00:10:23.310 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uBm 00:10:23.310 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.310 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.310 16:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.310 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uBm 00:10:23.310 16:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uBm 00:10:23.568 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:23.568 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Y8i 00:10:23.568 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.568 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.568 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.568 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Y8i 00:10:23.568 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Y8i 00:10:23.827 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.6Vk ]] 00:10:23.827 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Vk 00:10:23.827 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.827 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.827 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.827 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Vk 00:10:23.827 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Vk 00:10:24.086 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:24.086 
16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xUR 00:10:24.086 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.086 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.086 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.086 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xUR 00:10:24.086 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xUR 00:10:24.344 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:24.344 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:24.344 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:24.345 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:24.345 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:24.345 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:24.603 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:24.603 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:24.603 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:24.603 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:24.603 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:24.603 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.603 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.603 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.603 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.604 16:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.604 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.604 16:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.862 00:10:24.862 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.862 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:24.862 16:25:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:25.120 { 00:10:25.120 "cntlid": 1, 00:10:25.120 "qid": 0, 00:10:25.120 "state": "enabled", 00:10:25.120 "thread": "nvmf_tgt_poll_group_000", 00:10:25.120 "listen_address": { 00:10:25.120 "trtype": "TCP", 00:10:25.120 "adrfam": "IPv4", 00:10:25.120 "traddr": "10.0.0.2", 00:10:25.120 "trsvcid": "4420" 00:10:25.120 }, 00:10:25.120 "peer_address": { 00:10:25.120 "trtype": "TCP", 00:10:25.120 "adrfam": "IPv4", 00:10:25.120 "traddr": "10.0.0.1", 00:10:25.120 "trsvcid": "56036" 00:10:25.120 }, 00:10:25.120 "auth": { 00:10:25.120 "state": "completed", 00:10:25.120 "digest": "sha256", 00:10:25.120 "dhgroup": "null" 00:10:25.120 } 00:10:25.120 } 00:10:25.120 ]' 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:25.120 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:25.379 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.379 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.379 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.637 16:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
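Each keyid in this loop goes through the same target-side/host-side sequence just traced for key0. A condensed sketch of one round, with $subnqn, $hostnqn, $hostid and the $*_secret variables standing in for the literal NQNs and DHHC-1 strings shown in the log (rpc.py socket selection is simplified; the target-side calls actually run against the namespaced nvmf_tgt socket):

rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0                                    # target side
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0         # host side, performs DH-HMAC-CHAP
rpc.py nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'              # expect "completed"
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"           # kernel initiator path
nvme disconnect -n "$subnqn"
rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
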
"${!keys[@]}" 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.908 16:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.908 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.908 { 00:10:30.908 "cntlid": 3, 00:10:30.908 "qid": 0, 00:10:30.908 "state": "enabled", 00:10:30.908 "thread": "nvmf_tgt_poll_group_000", 00:10:30.908 "listen_address": { 00:10:30.908 "trtype": "TCP", 00:10:30.908 "adrfam": "IPv4", 00:10:30.908 "traddr": "10.0.0.2", 00:10:30.908 "trsvcid": "4420" 00:10:30.908 }, 00:10:30.908 "peer_address": { 00:10:30.908 "trtype": "TCP", 00:10:30.908 
"adrfam": "IPv4", 00:10:30.908 "traddr": "10.0.0.1", 00:10:30.908 "trsvcid": "56066" 00:10:30.908 }, 00:10:30.908 "auth": { 00:10:30.908 "state": "completed", 00:10:30.908 "digest": "sha256", 00:10:30.908 "dhgroup": "null" 00:10:30.908 } 00:10:30.908 } 00:10:30.908 ]' 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.908 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.166 16:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:10:32.114 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.115 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.712 00:10:32.712 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:32.712 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.712 16:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.969 { 00:10:32.969 "cntlid": 5, 00:10:32.969 "qid": 0, 00:10:32.969 "state": "enabled", 00:10:32.969 "thread": "nvmf_tgt_poll_group_000", 00:10:32.969 "listen_address": { 00:10:32.969 "trtype": "TCP", 00:10:32.969 "adrfam": "IPv4", 00:10:32.969 "traddr": "10.0.0.2", 00:10:32.969 "trsvcid": "4420" 00:10:32.969 }, 00:10:32.969 "peer_address": { 00:10:32.969 "trtype": "TCP", 00:10:32.969 "adrfam": "IPv4", 00:10:32.969 "traddr": "10.0.0.1", 00:10:32.969 "trsvcid": "56096" 00:10:32.969 }, 00:10:32.969 "auth": { 00:10:32.969 "state": "completed", 00:10:32.969 "digest": "sha256", 00:10:32.969 "dhgroup": "null" 00:10:32.969 } 00:10:32.969 } 00:10:32.969 ]' 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.969 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.534 16:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:10:34.109 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.109 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:34.109 16:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.109 16:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.109 16:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.109 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:34.109 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:34.109 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:34.365 16:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:34.931 00:10:34.931 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.931 16:25:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.931 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.187 { 00:10:35.187 "cntlid": 7, 00:10:35.187 "qid": 0, 00:10:35.187 "state": "enabled", 00:10:35.187 "thread": "nvmf_tgt_poll_group_000", 00:10:35.187 "listen_address": { 00:10:35.187 "trtype": "TCP", 00:10:35.187 "adrfam": "IPv4", 00:10:35.187 "traddr": "10.0.0.2", 00:10:35.187 "trsvcid": "4420" 00:10:35.187 }, 00:10:35.187 "peer_address": { 00:10:35.187 "trtype": "TCP", 00:10:35.187 "adrfam": "IPv4", 00:10:35.187 "traddr": "10.0.0.1", 00:10:35.187 "trsvcid": "51216" 00:10:35.187 }, 00:10:35.187 "auth": { 00:10:35.187 "state": "completed", 00:10:35.187 "digest": "sha256", 00:10:35.187 "dhgroup": "null" 00:10:35.187 } 00:10:35.187 } 00:10:35.187 ]' 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:35.187 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.188 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.188 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.188 16:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.780 16:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:10:36.345 16:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.345 16:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:36.345 16:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.345 16:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.345 16:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.345 16:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:10:36.345 16:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.345 16:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:36.345 16:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.602 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.167 00:10:37.167 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:37.167 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.167 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.425 { 00:10:37.425 "cntlid": 9, 00:10:37.425 "qid": 0, 00:10:37.425 "state": "enabled", 00:10:37.425 "thread": "nvmf_tgt_poll_group_000", 00:10:37.425 "listen_address": { 00:10:37.425 "trtype": "TCP", 00:10:37.425 "adrfam": "IPv4", 00:10:37.425 
"traddr": "10.0.0.2", 00:10:37.425 "trsvcid": "4420" 00:10:37.425 }, 00:10:37.425 "peer_address": { 00:10:37.425 "trtype": "TCP", 00:10:37.425 "adrfam": "IPv4", 00:10:37.425 "traddr": "10.0.0.1", 00:10:37.425 "trsvcid": "51236" 00:10:37.425 }, 00:10:37.425 "auth": { 00:10:37.425 "state": "completed", 00:10:37.425 "digest": "sha256", 00:10:37.425 "dhgroup": "ffdhe2048" 00:10:37.425 } 00:10:37.425 } 00:10:37.425 ]' 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.425 16:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.688 16:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:10:38.260 16:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.260 16:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:38.260 16:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.260 16:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.518 16:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.518 16:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.518 16:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:38.518 16:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.776 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.034 00:10:39.034 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.034 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.034 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.293 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.293 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.293 16:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.293 16:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.293 16:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.293 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.293 { 00:10:39.293 "cntlid": 11, 00:10:39.293 "qid": 0, 00:10:39.293 "state": "enabled", 00:10:39.293 "thread": "nvmf_tgt_poll_group_000", 00:10:39.293 "listen_address": { 00:10:39.293 "trtype": "TCP", 00:10:39.293 "adrfam": "IPv4", 00:10:39.293 "traddr": "10.0.0.2", 00:10:39.293 "trsvcid": "4420" 00:10:39.293 }, 00:10:39.293 "peer_address": { 00:10:39.293 "trtype": "TCP", 00:10:39.293 "adrfam": "IPv4", 00:10:39.293 "traddr": "10.0.0.1", 00:10:39.293 "trsvcid": "51266" 00:10:39.293 }, 00:10:39.293 "auth": { 00:10:39.293 "state": "completed", 00:10:39.293 "digest": "sha256", 00:10:39.293 "dhgroup": "ffdhe2048" 00:10:39.293 } 00:10:39.293 } 00:10:39.293 ]' 00:10:39.293 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:39.293 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.293 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.551 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:39.551 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.551 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.551 16:25:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.551 16:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.810 16:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:10:40.377 16:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.377 16:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:40.377 16:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.377 16:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.636 16:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.636 16:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.636 16:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:40.636 16:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.893 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.151 00:10:41.151 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:41.151 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:41.151 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.408 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.408 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.408 16:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.408 16:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.408 16:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.408 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:41.408 { 00:10:41.408 "cntlid": 13, 00:10:41.408 "qid": 0, 00:10:41.408 "state": "enabled", 00:10:41.408 "thread": "nvmf_tgt_poll_group_000", 00:10:41.408 "listen_address": { 00:10:41.408 "trtype": "TCP", 00:10:41.408 "adrfam": "IPv4", 00:10:41.408 "traddr": "10.0.0.2", 00:10:41.408 "trsvcid": "4420" 00:10:41.408 }, 00:10:41.408 "peer_address": { 00:10:41.408 "trtype": "TCP", 00:10:41.408 "adrfam": "IPv4", 00:10:41.408 "traddr": "10.0.0.1", 00:10:41.408 "trsvcid": "51298" 00:10:41.408 }, 00:10:41.408 "auth": { 00:10:41.408 "state": "completed", 00:10:41.408 "digest": "sha256", 00:10:41.408 "dhgroup": "ffdhe2048" 00:10:41.408 } 00:10:41.408 } 00:10:41.408 ]' 00:10:41.408 16:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:41.666 16:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.666 16:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:41.666 16:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:41.666 16:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:41.666 16:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.666 16:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.666 16:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.925 16:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:10:42.492 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.492 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 
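The round traced above repeats for every digest/dhgroup/key combination: the SPDK host app is restricted to one DH-HMAC-CHAP digest and DH group, the host NQN is registered on the subsystem with the key under test, and a controller is attached through the host RPC socket so authentication actually runs before the qpair is inspected. A minimal sketch of that first half of a round, reusing the socket path, NQNs and RPCs visible in the log (key2/ckey2 are key names set up earlier in the test, outside this excerpt), might look like:

#!/usr/bin/env bash
# Sketch of one connect_authenticate round (sha256 + ffdhe2048 + key2),
# mirroring the RPC calls visible in the trace above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock            # host-side SPDK application socket
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc

digest=sha256 dhgroup=ffdhe2048 key=key2   # round under test

# Host side: only negotiate the digest/dhgroup being exercised.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side (default RPC socket): allow this host NQN with the DH-CHAP
# key and the matching controller key ("c$key" -> ckey2, as in the log).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

# Host side: attaching the controller triggers DH-HMAC-CHAP authentication.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "c$key"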
00:10:42.492 16:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.492 16:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.751 16:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.009 16:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.009 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:43.009 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:43.268 00:10:43.268 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:43.268 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.268 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:43.525 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.525 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.525 16:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.525 16:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.525 16:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.525 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:43.525 { 00:10:43.525 "cntlid": 15, 00:10:43.525 "qid": 0, 
00:10:43.525 "state": "enabled", 00:10:43.525 "thread": "nvmf_tgt_poll_group_000", 00:10:43.525 "listen_address": { 00:10:43.525 "trtype": "TCP", 00:10:43.525 "adrfam": "IPv4", 00:10:43.525 "traddr": "10.0.0.2", 00:10:43.525 "trsvcid": "4420" 00:10:43.525 }, 00:10:43.525 "peer_address": { 00:10:43.525 "trtype": "TCP", 00:10:43.525 "adrfam": "IPv4", 00:10:43.525 "traddr": "10.0.0.1", 00:10:43.525 "trsvcid": "45372" 00:10:43.525 }, 00:10:43.525 "auth": { 00:10:43.525 "state": "completed", 00:10:43.525 "digest": "sha256", 00:10:43.525 "dhgroup": "ffdhe2048" 00:10:43.525 } 00:10:43.525 } 00:10:43.525 ]' 00:10:43.525 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:43.525 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.525 16:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.525 16:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:43.525 16:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.525 16:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.525 16:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.525 16:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.091 16:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:10:44.657 16:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.657 16:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:44.657 16:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.657 16:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.657 16:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.657 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.657 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.657 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:44.657 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.915 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.222 00:10:45.222 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:45.222 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:45.222 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.481 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.481 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.481 16:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.481 16:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.481 16:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.481 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.481 { 00:10:45.481 "cntlid": 17, 00:10:45.481 "qid": 0, 00:10:45.481 "state": "enabled", 00:10:45.481 "thread": "nvmf_tgt_poll_group_000", 00:10:45.481 "listen_address": { 00:10:45.481 "trtype": "TCP", 00:10:45.481 "adrfam": "IPv4", 00:10:45.481 "traddr": "10.0.0.2", 00:10:45.481 "trsvcid": "4420" 00:10:45.481 }, 00:10:45.481 "peer_address": { 00:10:45.481 "trtype": "TCP", 00:10:45.481 "adrfam": "IPv4", 00:10:45.481 "traddr": "10.0.0.1", 00:10:45.481 "trsvcid": "45394" 00:10:45.481 }, 00:10:45.481 "auth": { 00:10:45.481 "state": "completed", 00:10:45.481 "digest": "sha256", 00:10:45.481 "dhgroup": "ffdhe3072" 00:10:45.481 } 00:10:45.481 } 00:10:45.481 ]' 00:10:45.481 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.481 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.481 16:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.481 16:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:45.481 16:25:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.740 16:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.740 16:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.740 16:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.998 16:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:10:46.564 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.564 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:46.564 16:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.564 16:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.564 16:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.564 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:46.564 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:46.564 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.128 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.128 
16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.386 00:10:47.386 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.386 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.386 16:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.644 { 00:10:47.644 "cntlid": 19, 00:10:47.644 "qid": 0, 00:10:47.644 "state": "enabled", 00:10:47.644 "thread": "nvmf_tgt_poll_group_000", 00:10:47.644 "listen_address": { 00:10:47.644 "trtype": "TCP", 00:10:47.644 "adrfam": "IPv4", 00:10:47.644 "traddr": "10.0.0.2", 00:10:47.644 "trsvcid": "4420" 00:10:47.644 }, 00:10:47.644 "peer_address": { 00:10:47.644 "trtype": "TCP", 00:10:47.644 "adrfam": "IPv4", 00:10:47.644 "traddr": "10.0.0.1", 00:10:47.644 "trsvcid": "45410" 00:10:47.644 }, 00:10:47.644 "auth": { 00:10:47.644 "state": "completed", 00:10:47.644 "digest": "sha256", 00:10:47.644 "dhgroup": "ffdhe3072" 00:10:47.644 } 00:10:47.644 } 00:10:47.644 ]' 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:47.644 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:47.902 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.902 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.902 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.161 16:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:10:48.738 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
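After each attach, the trace checks the qpair the target created and then tears the round down, first through the SPDK host app and then again with the kernel initiator via nvme-cli. A hedged sketch of that verification/teardown half, using the same variable names as the sketch above and with the DHHC-1 secrets abbreviated to placeholders (the full secrets appear verbatim in the log):

# Verification and teardown half of a round, as seen in the trace
# (values shown are for the sha256 + ffdhe3072 round just above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc
hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc
host_secret='DHHC-1:...'   # placeholder; real host secret is in the log
ctrl_secret='DHHC-1:...'   # placeholder; real controller secret is in the log

# The host controller must exist, and its qpair on the target must report
# the negotiated digest/dhgroup and a completed authentication state.
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach the SPDK host controller, then repeat with the kernel initiator,
# which takes the DH-CHAP secrets directly on the command line.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" --dhchap-secret "$host_secret" \
    --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"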
00:10:48.738 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:48.738 16:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.738 16:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.738 16:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.738 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.738 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:48.738 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.304 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.562 00:10:49.562 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.562 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.562 16:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.821 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.821 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.821 16:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.821 16:25:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.821 16:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.821 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.821 { 00:10:49.821 "cntlid": 21, 00:10:49.821 "qid": 0, 00:10:49.821 "state": "enabled", 00:10:49.821 "thread": "nvmf_tgt_poll_group_000", 00:10:49.821 "listen_address": { 00:10:49.821 "trtype": "TCP", 00:10:49.821 "adrfam": "IPv4", 00:10:49.821 "traddr": "10.0.0.2", 00:10:49.821 "trsvcid": "4420" 00:10:49.821 }, 00:10:49.821 "peer_address": { 00:10:49.821 "trtype": "TCP", 00:10:49.821 "adrfam": "IPv4", 00:10:49.821 "traddr": "10.0.0.1", 00:10:49.821 "trsvcid": "45420" 00:10:49.821 }, 00:10:49.821 "auth": { 00:10:49.821 "state": "completed", 00:10:49.821 "digest": "sha256", 00:10:49.821 "dhgroup": "ffdhe3072" 00:10:49.821 } 00:10:49.821 } 00:10:49.821 ]' 00:10:49.821 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.821 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.821 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.079 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:50.079 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.079 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.079 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.079 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.337 16:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:10:50.939 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.939 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:50.940 16:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.940 16:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.197 16:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.197 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.197 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.197 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:51.454 16:25:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.454 16:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.713 00:10:51.713 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.713 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.713 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.972 { 00:10:51.972 "cntlid": 23, 00:10:51.972 "qid": 0, 00:10:51.972 "state": "enabled", 00:10:51.972 "thread": "nvmf_tgt_poll_group_000", 00:10:51.972 "listen_address": { 00:10:51.972 "trtype": "TCP", 00:10:51.972 "adrfam": "IPv4", 00:10:51.972 "traddr": "10.0.0.2", 00:10:51.972 "trsvcid": "4420" 00:10:51.972 }, 00:10:51.972 "peer_address": { 00:10:51.972 "trtype": "TCP", 00:10:51.972 "adrfam": "IPv4", 00:10:51.972 "traddr": "10.0.0.1", 00:10:51.972 "trsvcid": "45458" 00:10:51.972 }, 00:10:51.972 "auth": { 00:10:51.972 "state": "completed", 00:10:51.972 "digest": "sha256", 00:10:51.972 "dhgroup": "ffdhe3072" 00:10:51.972 } 00:10:51.972 } 00:10:51.972 ]' 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.972 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.537 16:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:10:53.104 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.104 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:53.104 16:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.104 16:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.104 16:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.104 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:53.104 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:53.104 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:53.104 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.362 16:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.621 00:10:53.621 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.621 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.621 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.880 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.880 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.880 16:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.880 16:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.880 16:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.880 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.880 { 00:10:53.880 "cntlid": 25, 00:10:53.880 "qid": 0, 00:10:53.880 "state": "enabled", 00:10:53.880 "thread": "nvmf_tgt_poll_group_000", 00:10:53.880 "listen_address": { 00:10:53.880 "trtype": "TCP", 00:10:53.880 "adrfam": "IPv4", 00:10:53.880 "traddr": "10.0.0.2", 00:10:53.880 "trsvcid": "4420" 00:10:53.880 }, 00:10:53.880 "peer_address": { 00:10:53.880 "trtype": "TCP", 00:10:53.880 "adrfam": "IPv4", 00:10:53.880 "traddr": "10.0.0.1", 00:10:53.880 "trsvcid": "43698" 00:10:53.880 }, 00:10:53.880 "auth": { 00:10:53.880 "state": "completed", 00:10:53.880 "digest": "sha256", 00:10:53.880 "dhgroup": "ffdhe4096" 00:10:53.880 } 00:10:53.880 } 00:10:53.880 ]' 00:10:53.880 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.139 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.139 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.139 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:54.139 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.139 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.139 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.139 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.399 16:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret 
DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:10:55.386 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.387 16:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.953 00:10:55.953 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.953 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.953 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.210 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
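The xtrace above is target/auth.sh stepping DH-HMAC-CHAP through one digest/dhgroup/key combination at a time: restrict the host-side bdev_nvme module to a single digest and DH group, then run connect_authenticate for each key index. A minimal sketch of that outer loop, reconstructed from the trace (the digests/dhgroups/keys arrays and the hostrpc helper are approximations of the script, not its verbatim source):

  # Reconstructed from the auth.sh xtrace; array contents and the helper body are approximations.
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  for digest in "${digests[@]}"; do        # sha256 in this part of the trace
    for dhgroup in "${dhgroups[@]}"; do    # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 appear above
      for keyid in "${!keys[@]}"; do       # key0..key3; key3 has no paired controller key in this trace
        # limit the host-side bdev_nvme module to this digest/dhgroup pair before connecting
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done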
00:10:56.210 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.210 16:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.210 16:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.210 16:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.210 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.210 { 00:10:56.210 "cntlid": 27, 00:10:56.210 "qid": 0, 00:10:56.210 "state": "enabled", 00:10:56.210 "thread": "nvmf_tgt_poll_group_000", 00:10:56.210 "listen_address": { 00:10:56.210 "trtype": "TCP", 00:10:56.210 "adrfam": "IPv4", 00:10:56.210 "traddr": "10.0.0.2", 00:10:56.210 "trsvcid": "4420" 00:10:56.210 }, 00:10:56.210 "peer_address": { 00:10:56.210 "trtype": "TCP", 00:10:56.210 "adrfam": "IPv4", 00:10:56.210 "traddr": "10.0.0.1", 00:10:56.210 "trsvcid": "43716" 00:10:56.210 }, 00:10:56.210 "auth": { 00:10:56.210 "state": "completed", 00:10:56.210 "digest": "sha256", 00:10:56.210 "dhgroup": "ffdhe4096" 00:10:56.210 } 00:10:56.210 } 00:10:56.210 ]' 00:10:56.210 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.210 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.210 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.468 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:56.468 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.468 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.468 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.468 16:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.726 16:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:10:57.659 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.659 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:57.659 16:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.659 16:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.659 16:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.659 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.659 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:57.659 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.915 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.480 00:10:58.480 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.480 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.480 16:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.738 { 00:10:58.738 "cntlid": 29, 00:10:58.738 "qid": 0, 00:10:58.738 "state": "enabled", 00:10:58.738 "thread": "nvmf_tgt_poll_group_000", 00:10:58.738 "listen_address": { 00:10:58.738 "trtype": "TCP", 00:10:58.738 "adrfam": "IPv4", 00:10:58.738 "traddr": "10.0.0.2", 00:10:58.738 "trsvcid": "4420" 00:10:58.738 }, 00:10:58.738 "peer_address": { 00:10:58.738 "trtype": "TCP", 00:10:58.738 "adrfam": "IPv4", 00:10:58.738 "traddr": "10.0.0.1", 00:10:58.738 "trsvcid": "43746" 00:10:58.738 }, 00:10:58.738 "auth": { 00:10:58.738 "state": "completed", 00:10:58.738 "digest": "sha256", 00:10:58.738 "dhgroup": 
"ffdhe4096" 00:10:58.738 } 00:10:58.738 } 00:10:58.738 ]' 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.738 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.997 16:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:10:59.564 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.823 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:10:59.823 16:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.823 16:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.823 16:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.823 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.823 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:59.823 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:00.081 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:00.340 00:11:00.340 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.340 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.340 16:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.598 { 00:11:00.598 "cntlid": 31, 00:11:00.598 "qid": 0, 00:11:00.598 "state": "enabled", 00:11:00.598 "thread": "nvmf_tgt_poll_group_000", 00:11:00.598 "listen_address": { 00:11:00.598 "trtype": "TCP", 00:11:00.598 "adrfam": "IPv4", 00:11:00.598 "traddr": "10.0.0.2", 00:11:00.598 "trsvcid": "4420" 00:11:00.598 }, 00:11:00.598 "peer_address": { 00:11:00.598 "trtype": "TCP", 00:11:00.598 "adrfam": "IPv4", 00:11:00.598 "traddr": "10.0.0.1", 00:11:00.598 "trsvcid": "43768" 00:11:00.598 }, 00:11:00.598 "auth": { 00:11:00.598 "state": "completed", 00:11:00.598 "digest": "sha256", 00:11:00.598 "dhgroup": "ffdhe4096" 00:11:00.598 } 00:11:00.598 } 00:11:00.598 ]' 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:00.598 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.855 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.855 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.855 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.114 16:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 
6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:11:01.681 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.681 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:01.681 16:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.681 16:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.681 16:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.681 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.681 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.681 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:01.681 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.940 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.561 00:11:02.561 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.561 16:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.561 16:25:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:02.821 { 00:11:02.821 "cntlid": 33, 00:11:02.821 "qid": 0, 00:11:02.821 "state": "enabled", 00:11:02.821 "thread": "nvmf_tgt_poll_group_000", 00:11:02.821 "listen_address": { 00:11:02.821 "trtype": "TCP", 00:11:02.821 "adrfam": "IPv4", 00:11:02.821 "traddr": "10.0.0.2", 00:11:02.821 "trsvcid": "4420" 00:11:02.821 }, 00:11:02.821 "peer_address": { 00:11:02.821 "trtype": "TCP", 00:11:02.821 "adrfam": "IPv4", 00:11:02.821 "traddr": "10.0.0.1", 00:11:02.821 "trsvcid": "43798" 00:11:02.821 }, 00:11:02.821 "auth": { 00:11:02.821 "state": "completed", 00:11:02.821 "digest": "sha256", 00:11:02.821 "dhgroup": "ffdhe6144" 00:11:02.821 } 00:11:02.821 } 00:11:02.821 ]' 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:02.821 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.079 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.079 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.080 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.339 16:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:11:03.906 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.906 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:03.906 16:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.906 16:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.906 16:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.906 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.906 
16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:03.906 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.165 16:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.732 00:11:04.732 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.732 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.732 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.991 { 00:11:04.991 "cntlid": 35, 00:11:04.991 "qid": 0, 00:11:04.991 "state": "enabled", 00:11:04.991 "thread": "nvmf_tgt_poll_group_000", 00:11:04.991 "listen_address": { 00:11:04.991 "trtype": "TCP", 00:11:04.991 "adrfam": "IPv4", 00:11:04.991 "traddr": "10.0.0.2", 00:11:04.991 "trsvcid": "4420" 00:11:04.991 }, 00:11:04.991 "peer_address": { 00:11:04.991 "trtype": "TCP", 00:11:04.991 
"adrfam": "IPv4", 00:11:04.991 "traddr": "10.0.0.1", 00:11:04.991 "trsvcid": "40036" 00:11:04.991 }, 00:11:04.991 "auth": { 00:11:04.991 "state": "completed", 00:11:04.991 "digest": "sha256", 00:11:04.991 "dhgroup": "ffdhe6144" 00:11:04.991 } 00:11:04.991 } 00:11:04.991 ]' 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.991 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.250 16:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:11:05.818 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.818 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:05.818 16:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.818 16:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.818 16:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.818 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.818 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:05.818 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.077 16:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.645 00:11:06.645 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.645 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.645 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.904 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.904 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.904 16:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.904 16:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.904 16:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.904 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.904 { 00:11:06.904 "cntlid": 37, 00:11:06.904 "qid": 0, 00:11:06.904 "state": "enabled", 00:11:06.904 "thread": "nvmf_tgt_poll_group_000", 00:11:06.904 "listen_address": { 00:11:06.904 "trtype": "TCP", 00:11:06.904 "adrfam": "IPv4", 00:11:06.904 "traddr": "10.0.0.2", 00:11:06.904 "trsvcid": "4420" 00:11:06.904 }, 00:11:06.904 "peer_address": { 00:11:06.904 "trtype": "TCP", 00:11:06.904 "adrfam": "IPv4", 00:11:06.904 "traddr": "10.0.0.1", 00:11:06.904 "trsvcid": "40066" 00:11:06.904 }, 00:11:06.904 "auth": { 00:11:06.904 "state": "completed", 00:11:06.904 "digest": "sha256", 00:11:06.904 "dhgroup": "ffdhe6144" 00:11:06.904 } 00:11:06.904 } 00:11:06.904 ]' 00:11:06.904 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.904 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.904 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.163 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:07.163 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.163 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.163 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.163 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.421 16:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:11:07.987 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.987 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:07.987 16:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.987 16:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.987 16:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.987 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.987 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:07.987 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.246 16:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.247 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:08.247 16:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:08.812 00:11:08.812 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
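Each connect_authenticate pass in the trace checks the authenticated qpair the same way: list the host-side controller, pull the subsystem's qpairs over the target RPC socket, and compare the negotiated digest, dhgroup and auth state with jq before detaching. A sketch of that verification step, reconstructed from the auth.sh lines 44-49 visible above (rpc_cmd and hostrpc stand in for the harness wrappers; error handling is omitted):

  # Verification step as it appears in the trace; wrappers and variables are approximations.
  name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]   # sha256 here
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # e.g. ffdhe6144
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

  hostrpc bdev_nvme_detach_controller nvme0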
00:11:08.812 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.812 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.071 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.071 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.071 16:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.071 16:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.071 16:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.071 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.071 { 00:11:09.071 "cntlid": 39, 00:11:09.071 "qid": 0, 00:11:09.071 "state": "enabled", 00:11:09.071 "thread": "nvmf_tgt_poll_group_000", 00:11:09.071 "listen_address": { 00:11:09.071 "trtype": "TCP", 00:11:09.071 "adrfam": "IPv4", 00:11:09.071 "traddr": "10.0.0.2", 00:11:09.071 "trsvcid": "4420" 00:11:09.071 }, 00:11:09.071 "peer_address": { 00:11:09.071 "trtype": "TCP", 00:11:09.071 "adrfam": "IPv4", 00:11:09.071 "traddr": "10.0.0.1", 00:11:09.071 "trsvcid": "40092" 00:11:09.071 }, 00:11:09.071 "auth": { 00:11:09.071 "state": "completed", 00:11:09.071 "digest": "sha256", 00:11:09.071 "dhgroup": "ffdhe6144" 00:11:09.071 } 00:11:09.071 } 00:11:09.071 ]' 00:11:09.071 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.071 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.071 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.328 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:09.328 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.328 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.328 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.328 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.587 16:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:11:10.153 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.153 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:10.153 16:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.153 16:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.153 16:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.153 16:25:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.153 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.153 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:10.153 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.515 16:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.079 00:11:11.079 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.079 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.079 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.337 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.337 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.337 16:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.337 16:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.337 16:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.337 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.337 { 00:11:11.337 "cntlid": 41, 00:11:11.337 "qid": 0, 00:11:11.337 "state": "enabled", 00:11:11.337 "thread": "nvmf_tgt_poll_group_000", 00:11:11.337 "listen_address": { 00:11:11.337 "trtype": 
"TCP", 00:11:11.337 "adrfam": "IPv4", 00:11:11.337 "traddr": "10.0.0.2", 00:11:11.337 "trsvcid": "4420" 00:11:11.337 }, 00:11:11.337 "peer_address": { 00:11:11.337 "trtype": "TCP", 00:11:11.337 "adrfam": "IPv4", 00:11:11.337 "traddr": "10.0.0.1", 00:11:11.337 "trsvcid": "40122" 00:11:11.337 }, 00:11:11.337 "auth": { 00:11:11.337 "state": "completed", 00:11:11.337 "digest": "sha256", 00:11:11.337 "dhgroup": "ffdhe8192" 00:11:11.337 } 00:11:11.337 } 00:11:11.337 ]' 00:11:11.337 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.595 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.595 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.595 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:11.595 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.595 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.595 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.595 16:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.853 16:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:11:12.419 16:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.419 16:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:12.419 16:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.419 16:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.419 16:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.419 16:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.419 16:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:12.419 16:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:12.987 16:25:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.987 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.554 00:11:13.554 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.555 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.555 16:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.813 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.813 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.813 16:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.813 16:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.813 16:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.814 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.814 { 00:11:13.814 "cntlid": 43, 00:11:13.814 "qid": 0, 00:11:13.814 "state": "enabled", 00:11:13.814 "thread": "nvmf_tgt_poll_group_000", 00:11:13.814 "listen_address": { 00:11:13.814 "trtype": "TCP", 00:11:13.814 "adrfam": "IPv4", 00:11:13.814 "traddr": "10.0.0.2", 00:11:13.814 "trsvcid": "4420" 00:11:13.814 }, 00:11:13.814 "peer_address": { 00:11:13.814 "trtype": "TCP", 00:11:13.814 "adrfam": "IPv4", 00:11:13.814 "traddr": "10.0.0.1", 00:11:13.814 "trsvcid": "58506" 00:11:13.814 }, 00:11:13.814 "auth": { 00:11:13.814 "state": "completed", 00:11:13.814 "digest": "sha256", 00:11:13.814 "dhgroup": "ffdhe8192" 00:11:13.814 } 00:11:13.814 } 00:11:13.814 ]' 00:11:13.814 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.814 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.814 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.814 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:13.814 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.814 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:13.814 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.814 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.135 16:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.075 16:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.011 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.011 { 00:11:16.011 "cntlid": 45, 00:11:16.011 "qid": 0, 00:11:16.011 "state": "enabled", 00:11:16.011 "thread": "nvmf_tgt_poll_group_000", 00:11:16.011 "listen_address": { 00:11:16.011 "trtype": "TCP", 00:11:16.011 "adrfam": "IPv4", 00:11:16.011 "traddr": "10.0.0.2", 00:11:16.011 "trsvcid": "4420" 00:11:16.011 }, 00:11:16.011 "peer_address": { 00:11:16.011 "trtype": "TCP", 00:11:16.011 "adrfam": "IPv4", 00:11:16.011 "traddr": "10.0.0.1", 00:11:16.011 "trsvcid": "58536" 00:11:16.011 }, 00:11:16.011 "auth": { 00:11:16.011 "state": "completed", 00:11:16.011 "digest": "sha256", 00:11:16.011 "dhgroup": "ffdhe8192" 00:11:16.011 } 00:11:16.011 } 00:11:16.011 ]' 00:11:16.011 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.268 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.268 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.268 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:16.268 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.268 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.268 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.268 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.526 16:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.461 16:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.028 00:11:18.028 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.028 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:18.028 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:18.601 { 00:11:18.601 "cntlid": 47, 00:11:18.601 "qid": 0, 00:11:18.601 "state": "enabled", 00:11:18.601 "thread": "nvmf_tgt_poll_group_000", 00:11:18.601 "listen_address": { 00:11:18.601 "trtype": "TCP", 00:11:18.601 "adrfam": "IPv4", 00:11:18.601 "traddr": "10.0.0.2", 00:11:18.601 "trsvcid": "4420" 00:11:18.601 }, 00:11:18.601 "peer_address": { 00:11:18.601 "trtype": "TCP", 00:11:18.601 "adrfam": "IPv4", 00:11:18.601 "traddr": "10.0.0.1", 00:11:18.601 "trsvcid": "58564" 00:11:18.601 }, 00:11:18.601 "auth": { 00:11:18.601 "state": "completed", 00:11:18.601 "digest": "sha256", 00:11:18.601 "dhgroup": "ffdhe8192" 00:11:18.601 } 00:11:18.601 } 00:11:18.601 ]' 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:18.601 16:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.601 16:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.601 16:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.601 16:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.868 16:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:11:19.802 16:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
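For reference, a minimal sketch (not part of the captured trace) of the flow that target/auth.sh repeats above for every digest/dhgroup/key combination, reconstructed only from the RPC calls visible in this trace. The hostrpc expansion is the one printed at target/auth.sh@31; the rpc_cmd wrapper from autotest_common.sh is assumed to talk to the target's RPC socket (not shown in this excerpt), and key0/ckey0 are keyring names registered earlier in the test.

  # assumptions: rpc_cmd reaches the target RPC socket; key0/ckey0 already exist
  uuid=6219369d-37e8-4ec9-9c79-8e30851e9efc
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  # restrict the host to one digest/dhgroup pair, then authenticate with one key
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
          "nqn.2014-08.org.nvmexpress:uuid:$uuid" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" -n nqn.2024-03.io.spdk:cnode0 \
          --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'         # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0  # expect auth.state == "completed"
  hostrpc bdev_nvme_detach_controller nvme0

The surrounding trace is exactly this sequence: the sha256/ffdhe8192 iterations (keys 2 and 3) finish above, and the sha384/null iteration for key 0 begins below.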
00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.802 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.060 00:11:20.060 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.060 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.060 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.318 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.318 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.318 16:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.318 16:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.318 16:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.318 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.318 { 00:11:20.318 "cntlid": 49, 00:11:20.318 "qid": 0, 00:11:20.318 "state": "enabled", 00:11:20.318 "thread": "nvmf_tgt_poll_group_000", 00:11:20.318 "listen_address": { 00:11:20.318 "trtype": "TCP", 00:11:20.318 "adrfam": "IPv4", 00:11:20.318 "traddr": "10.0.0.2", 00:11:20.318 "trsvcid": "4420" 00:11:20.318 }, 00:11:20.318 "peer_address": { 00:11:20.318 "trtype": "TCP", 00:11:20.318 "adrfam": "IPv4", 00:11:20.318 "traddr": "10.0.0.1", 00:11:20.318 "trsvcid": "58590" 00:11:20.318 }, 00:11:20.318 "auth": { 00:11:20.318 "state": "completed", 00:11:20.318 "digest": "sha384", 00:11:20.318 "dhgroup": "null" 00:11:20.318 } 00:11:20.318 } 00:11:20.318 ]' 00:11:20.318 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.577 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.577 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.577 16:26:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:20.577 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.577 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.577 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.577 16:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.835 16:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:11:21.401 16:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.401 16:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:21.401 16:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.401 16:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.401 16:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.401 16:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.401 16:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:21.401 16:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.659 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.225 00:11:22.225 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.225 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.225 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.482 { 00:11:22.482 "cntlid": 51, 00:11:22.482 "qid": 0, 00:11:22.482 "state": "enabled", 00:11:22.482 "thread": "nvmf_tgt_poll_group_000", 00:11:22.482 "listen_address": { 00:11:22.482 "trtype": "TCP", 00:11:22.482 "adrfam": "IPv4", 00:11:22.482 "traddr": "10.0.0.2", 00:11:22.482 "trsvcid": "4420" 00:11:22.482 }, 00:11:22.482 "peer_address": { 00:11:22.482 "trtype": "TCP", 00:11:22.482 "adrfam": "IPv4", 00:11:22.482 "traddr": "10.0.0.1", 00:11:22.482 "trsvcid": "58618" 00:11:22.482 }, 00:11:22.482 "auth": { 00:11:22.482 "state": "completed", 00:11:22.482 "digest": "sha384", 00:11:22.482 "dhgroup": "null" 00:11:22.482 } 00:11:22.482 } 00:11:22.482 ]' 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.482 16:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.739 16:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:11:23.330 16:26:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.587 16:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:23.587 16:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.587 16:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.587 16:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.587 16:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.587 16:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:23.587 16:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.587 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.176 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.176 { 00:11:24.176 "cntlid": 53, 00:11:24.176 "qid": 0, 00:11:24.176 "state": "enabled", 00:11:24.176 "thread": "nvmf_tgt_poll_group_000", 00:11:24.176 "listen_address": { 00:11:24.176 "trtype": "TCP", 00:11:24.176 "adrfam": "IPv4", 00:11:24.176 "traddr": "10.0.0.2", 00:11:24.176 "trsvcid": "4420" 00:11:24.176 }, 00:11:24.176 "peer_address": { 00:11:24.176 "trtype": "TCP", 00:11:24.176 "adrfam": "IPv4", 00:11:24.176 "traddr": "10.0.0.1", 00:11:24.176 "trsvcid": "50914" 00:11:24.176 }, 00:11:24.176 "auth": { 00:11:24.176 "state": "completed", 00:11:24.176 "digest": "sha384", 00:11:24.176 "dhgroup": "null" 00:11:24.176 } 00:11:24.176 } 00:11:24.176 ]' 00:11:24.176 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.434 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.434 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.434 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:24.434 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.434 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.434 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.434 16:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.691 16:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:11:25.256 16:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.256 16:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:25.256 16:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.256 16:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.256 16:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.256 16:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.256 16:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:25.256 16:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:25.513 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:25.513 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.513 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:25.513 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:25.513 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:25.513 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.513 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:11:25.513 16:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.514 16:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.514 16:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.514 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.514 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.770 00:11:26.041 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.041 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.041 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.314 { 00:11:26.314 "cntlid": 55, 00:11:26.314 "qid": 0, 00:11:26.314 "state": "enabled", 00:11:26.314 "thread": "nvmf_tgt_poll_group_000", 00:11:26.314 "listen_address": { 00:11:26.314 "trtype": "TCP", 00:11:26.314 "adrfam": "IPv4", 00:11:26.314 "traddr": "10.0.0.2", 00:11:26.314 "trsvcid": "4420" 00:11:26.314 }, 00:11:26.314 "peer_address": { 00:11:26.314 "trtype": "TCP", 00:11:26.314 "adrfam": "IPv4", 00:11:26.314 "traddr": "10.0.0.1", 00:11:26.314 "trsvcid": "50930" 00:11:26.314 }, 00:11:26.314 "auth": { 00:11:26.314 "state": "completed", 00:11:26.314 "digest": "sha384", 00:11:26.314 "dhgroup": "null" 00:11:26.314 } 00:11:26.314 } 00:11:26.314 ]' 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.314 16:26:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.314 16:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.569 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:27.497 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.498 16:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.755 00:11:27.755 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.755 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.755 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.012 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.012 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.012 16:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.012 16:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.012 16:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.012 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.012 { 00:11:28.012 "cntlid": 57, 00:11:28.012 "qid": 0, 00:11:28.012 "state": "enabled", 00:11:28.012 "thread": "nvmf_tgt_poll_group_000", 00:11:28.012 "listen_address": { 00:11:28.012 "trtype": "TCP", 00:11:28.012 "adrfam": "IPv4", 00:11:28.012 "traddr": "10.0.0.2", 00:11:28.012 "trsvcid": "4420" 00:11:28.012 }, 00:11:28.012 "peer_address": { 00:11:28.012 "trtype": "TCP", 00:11:28.012 "adrfam": "IPv4", 00:11:28.012 "traddr": "10.0.0.1", 00:11:28.012 "trsvcid": "50946" 00:11:28.012 }, 00:11:28.012 "auth": { 00:11:28.012 "state": "completed", 00:11:28.012 "digest": "sha384", 00:11:28.012 "dhgroup": "ffdhe2048" 00:11:28.012 } 00:11:28.012 } 00:11:28.012 ]' 00:11:28.012 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.012 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.012 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.270 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:28.270 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.270 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.270 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.270 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.527 16:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret 
DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.460 16:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.024 00:11:30.024 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.024 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.024 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
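A companion sketch of the verification half of each iteration, as it appears in the surrounding trace: assert the negotiated auth parameters on the target, detach the SPDK host controller, then repeat the DH-HMAC-CHAP handshake with the kernel host stack via nvme-cli before removing the host. The exact way the script feeds the qpairs JSON to jq is not shown here, so the herestring is an assumption; $host_key and $ctrl_key are placeholders for the DHHC-1 secret strings printed in the trace for the key in use.

  # assumptions: rpc_cmd reaches the target RPC socket; $host_key/$ctrl_key hold
  # the DHHC-1:xx:... secrets shown above for this key index
  uuid=6219369d-37e8-4ec9-9c79-8e30851e9efc
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
       -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" \
       --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "nqn.2014-08.org.nvmexpress:uuid:$uuid"

Checking digest, dhgroup, and state on the target's qpair is what distinguishes a genuinely authenticated connection from one that merely connected, which is why the trace runs the three jq comparisons before every detach.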
00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.282 { 00:11:30.282 "cntlid": 59, 00:11:30.282 "qid": 0, 00:11:30.282 "state": "enabled", 00:11:30.282 "thread": "nvmf_tgt_poll_group_000", 00:11:30.282 "listen_address": { 00:11:30.282 "trtype": "TCP", 00:11:30.282 "adrfam": "IPv4", 00:11:30.282 "traddr": "10.0.0.2", 00:11:30.282 "trsvcid": "4420" 00:11:30.282 }, 00:11:30.282 "peer_address": { 00:11:30.282 "trtype": "TCP", 00:11:30.282 "adrfam": "IPv4", 00:11:30.282 "traddr": "10.0.0.1", 00:11:30.282 "trsvcid": "50986" 00:11:30.282 }, 00:11:30.282 "auth": { 00:11:30.282 "state": "completed", 00:11:30.282 "digest": "sha384", 00:11:30.282 "dhgroup": "ffdhe2048" 00:11:30.282 } 00:11:30.282 } 00:11:30.282 ]' 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.282 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.283 16:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.849 16:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:11:31.413 16:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.413 16:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:31.413 16:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.413 16:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.413 16:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.413 16:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.413 16:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:31.413 16:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.671 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.929 00:11:31.929 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.929 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.929 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.494 { 00:11:32.494 "cntlid": 61, 00:11:32.494 "qid": 0, 00:11:32.494 "state": "enabled", 00:11:32.494 "thread": "nvmf_tgt_poll_group_000", 00:11:32.494 "listen_address": { 00:11:32.494 "trtype": "TCP", 00:11:32.494 "adrfam": "IPv4", 00:11:32.494 "traddr": "10.0.0.2", 00:11:32.494 "trsvcid": "4420" 00:11:32.494 }, 00:11:32.494 "peer_address": { 00:11:32.494 "trtype": "TCP", 00:11:32.494 "adrfam": "IPv4", 00:11:32.494 "traddr": "10.0.0.1", 00:11:32.494 "trsvcid": "51006" 00:11:32.494 }, 00:11:32.494 "auth": { 00:11:32.494 "state": "completed", 00:11:32.494 "digest": "sha384", 00:11:32.494 "dhgroup": 
"ffdhe2048" 00:11:32.494 } 00:11:32.494 } 00:11:32.494 ]' 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.494 16:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.753 16:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:11:33.688 16:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.688 16:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:33.688 16:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.688 16:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.688 16:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.688 16:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.688 16:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:33.688 16:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.946 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:34.205 00:11:34.205 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.205 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.205 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.463 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.463 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.463 16:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.463 16:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.463 16:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.463 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.463 { 00:11:34.463 "cntlid": 63, 00:11:34.463 "qid": 0, 00:11:34.463 "state": "enabled", 00:11:34.463 "thread": "nvmf_tgt_poll_group_000", 00:11:34.463 "listen_address": { 00:11:34.463 "trtype": "TCP", 00:11:34.463 "adrfam": "IPv4", 00:11:34.463 "traddr": "10.0.0.2", 00:11:34.463 "trsvcid": "4420" 00:11:34.463 }, 00:11:34.463 "peer_address": { 00:11:34.463 "trtype": "TCP", 00:11:34.463 "adrfam": "IPv4", 00:11:34.463 "traddr": "10.0.0.1", 00:11:34.463 "trsvcid": "43358" 00:11:34.463 }, 00:11:34.463 "auth": { 00:11:34.463 "state": "completed", 00:11:34.463 "digest": "sha384", 00:11:34.463 "dhgroup": "ffdhe2048" 00:11:34.463 } 00:11:34.463 } 00:11:34.463 ]' 00:11:34.463 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.463 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.463 16:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.721 16:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:34.721 16:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.721 16:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.721 16:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.721 16:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.980 16:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 
6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:11:35.546 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.546 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:35.546 16:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.546 16:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.546 16:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.546 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:35.546 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.546 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:35.546 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.112 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.370 00:11:36.370 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.370 16:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.370 16:26:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.628 { 00:11:36.628 "cntlid": 65, 00:11:36.628 "qid": 0, 00:11:36.628 "state": "enabled", 00:11:36.628 "thread": "nvmf_tgt_poll_group_000", 00:11:36.628 "listen_address": { 00:11:36.628 "trtype": "TCP", 00:11:36.628 "adrfam": "IPv4", 00:11:36.628 "traddr": "10.0.0.2", 00:11:36.628 "trsvcid": "4420" 00:11:36.628 }, 00:11:36.628 "peer_address": { 00:11:36.628 "trtype": "TCP", 00:11:36.628 "adrfam": "IPv4", 00:11:36.628 "traddr": "10.0.0.1", 00:11:36.628 "trsvcid": "43376" 00:11:36.628 }, 00:11:36.628 "auth": { 00:11:36.628 "state": "completed", 00:11:36.628 "digest": "sha384", 00:11:36.628 "dhgroup": "ffdhe3072" 00:11:36.628 } 00:11:36.628 } 00:11:36.628 ]' 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:36.628 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:36.887 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.887 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.887 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.145 16:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:11:37.710 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.710 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:37.710 16:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.710 16:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.710 16:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.710 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.710 
16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:37.710 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.968 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.225 00:11:38.225 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.225 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.225 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.482 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.482 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.482 16:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.482 16:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.482 16:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.482 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.482 { 00:11:38.482 "cntlid": 67, 00:11:38.482 "qid": 0, 00:11:38.482 "state": "enabled", 00:11:38.482 "thread": "nvmf_tgt_poll_group_000", 00:11:38.482 "listen_address": { 00:11:38.482 "trtype": "TCP", 00:11:38.482 "adrfam": "IPv4", 00:11:38.482 "traddr": "10.0.0.2", 00:11:38.482 "trsvcid": "4420" 00:11:38.482 }, 00:11:38.482 "peer_address": { 00:11:38.482 "trtype": "TCP", 00:11:38.482 
"adrfam": "IPv4", 00:11:38.482 "traddr": "10.0.0.1", 00:11:38.482 "trsvcid": "43406" 00:11:38.482 }, 00:11:38.482 "auth": { 00:11:38.482 "state": "completed", 00:11:38.482 "digest": "sha384", 00:11:38.482 "dhgroup": "ffdhe3072" 00:11:38.482 } 00:11:38.482 } 00:11:38.482 ]' 00:11:38.482 16:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.739 16:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.739 16:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.739 16:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:38.739 16:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.739 16:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.739 16:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.739 16:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.997 16:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.930 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.495 00:11:40.495 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.495 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.495 16:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.752 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.752 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.752 16:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.752 16:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.752 16:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.752 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.752 { 00:11:40.752 "cntlid": 69, 00:11:40.752 "qid": 0, 00:11:40.752 "state": "enabled", 00:11:40.752 "thread": "nvmf_tgt_poll_group_000", 00:11:40.752 "listen_address": { 00:11:40.752 "trtype": "TCP", 00:11:40.752 "adrfam": "IPv4", 00:11:40.752 "traddr": "10.0.0.2", 00:11:40.752 "trsvcid": "4420" 00:11:40.752 }, 00:11:40.752 "peer_address": { 00:11:40.752 "trtype": "TCP", 00:11:40.752 "adrfam": "IPv4", 00:11:40.752 "traddr": "10.0.0.1", 00:11:40.752 "trsvcid": "43434" 00:11:40.752 }, 00:11:40.753 "auth": { 00:11:40.753 "state": "completed", 00:11:40.753 "digest": "sha384", 00:11:40.753 "dhgroup": "ffdhe3072" 00:11:40.753 } 00:11:40.753 } 00:11:40.753 ]' 00:11:40.753 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.753 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.753 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.753 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:40.753 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.753 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.753 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.753 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.317 16:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:11:41.882 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.882 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:41.882 16:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.882 16:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.882 16:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.882 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.882 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:41.882 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:42.139 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:42.732 00:11:42.732 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
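A minimal sketch of the per-key host-RPC leg that this trace loops over (set the digest/dhgroup under test, authorize the host on the subsystem, attach, then detach before the next key). The key names key1/ckey1 are assumed to refer to DH-CHAP keys loaded into the target and host keyrings earlier in the run; paths, sockets, addresses and NQNs are the ones shown in the trace above:

  # target: permit the host NQN on the subsystem with this DH-CHAP key pair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host: pin the digest/dhgroup under test, then attach (DH-CHAP runs during connect)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host: tear the controller down again before the next key is tried
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0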
00:11:42.732 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.732 16:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.732 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.732 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.732 16:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.732 16:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.732 16:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.732 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.732 { 00:11:42.732 "cntlid": 71, 00:11:42.732 "qid": 0, 00:11:42.732 "state": "enabled", 00:11:42.732 "thread": "nvmf_tgt_poll_group_000", 00:11:42.732 "listen_address": { 00:11:42.732 "trtype": "TCP", 00:11:42.732 "adrfam": "IPv4", 00:11:42.732 "traddr": "10.0.0.2", 00:11:42.732 "trsvcid": "4420" 00:11:42.732 }, 00:11:42.732 "peer_address": { 00:11:42.732 "trtype": "TCP", 00:11:42.732 "adrfam": "IPv4", 00:11:42.732 "traddr": "10.0.0.1", 00:11:42.732 "trsvcid": "43470" 00:11:42.732 }, 00:11:42.732 "auth": { 00:11:42.732 "state": "completed", 00:11:42.732 "digest": "sha384", 00:11:42.732 "dhgroup": "ffdhe3072" 00:11:42.732 } 00:11:42.732 } 00:11:42.732 ]' 00:11:42.732 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.990 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.990 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.990 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:42.990 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.990 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.990 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.990 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.248 16:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:11:44.184 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.184 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:44.184 16:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.184 16:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.184 16:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.184 16:26:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.184 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.184 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:44.184 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.443 16:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.702 00:11:44.702 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.702 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.702 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.960 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.960 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.960 16:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.960 16:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.960 16:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.960 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.960 { 00:11:44.960 "cntlid": 73, 00:11:44.960 "qid": 0, 00:11:44.960 "state": "enabled", 00:11:44.960 "thread": "nvmf_tgt_poll_group_000", 00:11:44.960 "listen_address": { 00:11:44.960 "trtype": 
"TCP", 00:11:44.960 "adrfam": "IPv4", 00:11:44.960 "traddr": "10.0.0.2", 00:11:44.960 "trsvcid": "4420" 00:11:44.960 }, 00:11:44.960 "peer_address": { 00:11:44.960 "trtype": "TCP", 00:11:44.960 "adrfam": "IPv4", 00:11:44.960 "traddr": "10.0.0.1", 00:11:44.960 "trsvcid": "44560" 00:11:44.960 }, 00:11:44.960 "auth": { 00:11:44.960 "state": "completed", 00:11:44.960 "digest": "sha384", 00:11:44.960 "dhgroup": "ffdhe4096" 00:11:44.960 } 00:11:44.960 } 00:11:44.960 ]' 00:11:44.960 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.960 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.960 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.218 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:45.218 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.218 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.218 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.218 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.485 16:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:11:46.055 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.055 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:46.055 16:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.055 16:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.055 16:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.055 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.055 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:46.055 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:46.313 16:26:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.313 16:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.571 00:11:46.571 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.571 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.571 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.137 { 00:11:47.137 "cntlid": 75, 00:11:47.137 "qid": 0, 00:11:47.137 "state": "enabled", 00:11:47.137 "thread": "nvmf_tgt_poll_group_000", 00:11:47.137 "listen_address": { 00:11:47.137 "trtype": "TCP", 00:11:47.137 "adrfam": "IPv4", 00:11:47.137 "traddr": "10.0.0.2", 00:11:47.137 "trsvcid": "4420" 00:11:47.137 }, 00:11:47.137 "peer_address": { 00:11:47.137 "trtype": "TCP", 00:11:47.137 "adrfam": "IPv4", 00:11:47.137 "traddr": "10.0.0.1", 00:11:47.137 "trsvcid": "44598" 00:11:47.137 }, 00:11:47.137 "auth": { 00:11:47.137 "state": "completed", 00:11:47.137 "digest": "sha384", 00:11:47.137 "dhgroup": "ffdhe4096" 00:11:47.137 } 00:11:47.137 } 00:11:47.137 ]' 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.137 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.411 16:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:11:47.985 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.985 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:47.985 16:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.985 16:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.985 16:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.985 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.985 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:47.985 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.243 16:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.809 00:11:48.809 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.809 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.809 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.068 { 00:11:49.068 "cntlid": 77, 00:11:49.068 "qid": 0, 00:11:49.068 "state": "enabled", 00:11:49.068 "thread": "nvmf_tgt_poll_group_000", 00:11:49.068 "listen_address": { 00:11:49.068 "trtype": "TCP", 00:11:49.068 "adrfam": "IPv4", 00:11:49.068 "traddr": "10.0.0.2", 00:11:49.068 "trsvcid": "4420" 00:11:49.068 }, 00:11:49.068 "peer_address": { 00:11:49.068 "trtype": "TCP", 00:11:49.068 "adrfam": "IPv4", 00:11:49.068 "traddr": "10.0.0.1", 00:11:49.068 "trsvcid": "44626" 00:11:49.068 }, 00:11:49.068 "auth": { 00:11:49.068 "state": "completed", 00:11:49.068 "digest": "sha384", 00:11:49.068 "dhgroup": "ffdhe4096" 00:11:49.068 } 00:11:49.068 } 00:11:49.068 ]' 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.068 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.328 16:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:11:50.264 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.264 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:50.264 16:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.264 16:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.264 16:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.264 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.264 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:50.264 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:50.523 16:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:50.782 00:11:50.782 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.782 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.782 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.069 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.069 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.069 16:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.069 16:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.069 16:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.069 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:51.069 { 00:11:51.069 "cntlid": 79, 00:11:51.069 "qid": 0, 00:11:51.069 "state": "enabled", 00:11:51.069 "thread": "nvmf_tgt_poll_group_000", 00:11:51.069 "listen_address": { 00:11:51.069 "trtype": "TCP", 00:11:51.069 "adrfam": "IPv4", 00:11:51.069 "traddr": "10.0.0.2", 00:11:51.069 "trsvcid": "4420" 00:11:51.069 }, 00:11:51.069 "peer_address": { 00:11:51.069 "trtype": "TCP", 00:11:51.069 "adrfam": "IPv4", 00:11:51.069 "traddr": "10.0.0.1", 00:11:51.069 "trsvcid": "44654" 00:11:51.069 }, 00:11:51.069 "auth": { 00:11:51.069 "state": "completed", 00:11:51.069 "digest": "sha384", 00:11:51.069 "dhgroup": "ffdhe4096" 00:11:51.069 } 00:11:51.069 } 00:11:51.069 ]' 00:11:51.069 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.069 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.069 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.328 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:51.328 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.328 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.328 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.328 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.587 16:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:11:52.154 16:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.412 16:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:52.412 16:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.412 16:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.412 16:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.412 16:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.412 16:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.412 16:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:52.412 16:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
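A compact sketch of the verification step that follows each attach in this trace, plus the kernel-initiator pass that exercises the same key material through nvme-cli. The DHHC-1 placeholders below stand in for the --dhchap-secret/--dhchap-ctrl-secret strings printed in the log; they are not literal values:

  # confirm the new admin qpair negotiated the digest/dhgroup under test and completed auth
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  # kernel-initiator leg with the same keys (secrets abbreviated here, see the log for the real strings)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc \
      --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc \
      --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0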
00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.670 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.951 00:11:53.209 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.209 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.209 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.467 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.467 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.467 16:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.467 16:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.467 16:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.467 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.467 { 00:11:53.467 "cntlid": 81, 00:11:53.467 "qid": 0, 00:11:53.467 "state": "enabled", 00:11:53.467 "thread": "nvmf_tgt_poll_group_000", 00:11:53.467 "listen_address": { 00:11:53.467 "trtype": "TCP", 00:11:53.467 "adrfam": "IPv4", 00:11:53.467 "traddr": "10.0.0.2", 00:11:53.467 "trsvcid": "4420" 00:11:53.467 }, 00:11:53.467 "peer_address": { 00:11:53.467 "trtype": "TCP", 00:11:53.467 "adrfam": "IPv4", 00:11:53.468 "traddr": "10.0.0.1", 00:11:53.468 "trsvcid": "48714" 00:11:53.468 }, 00:11:53.468 "auth": { 00:11:53.468 "state": "completed", 00:11:53.468 "digest": "sha384", 00:11:53.468 "dhgroup": "ffdhe6144" 00:11:53.468 } 00:11:53.468 } 00:11:53.468 ]' 00:11:53.468 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.468 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.468 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.468 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:11:53.468 16:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.726 16:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.726 16:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.726 16:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.985 16:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:11:54.553 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.554 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:54.554 16:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.554 16:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.554 16:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.554 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.554 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:54.554 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.815 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.381 00:11:55.381 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.381 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.381 16:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.640 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.640 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.640 16:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.640 16:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.640 16:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.640 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.640 { 00:11:55.640 "cntlid": 83, 00:11:55.640 "qid": 0, 00:11:55.640 "state": "enabled", 00:11:55.640 "thread": "nvmf_tgt_poll_group_000", 00:11:55.640 "listen_address": { 00:11:55.640 "trtype": "TCP", 00:11:55.640 "adrfam": "IPv4", 00:11:55.640 "traddr": "10.0.0.2", 00:11:55.640 "trsvcid": "4420" 00:11:55.640 }, 00:11:55.640 "peer_address": { 00:11:55.640 "trtype": "TCP", 00:11:55.640 "adrfam": "IPv4", 00:11:55.640 "traddr": "10.0.0.1", 00:11:55.640 "trsvcid": "48748" 00:11:55.640 }, 00:11:55.640 "auth": { 00:11:55.640 "state": "completed", 00:11:55.640 "digest": "sha384", 00:11:55.640 "dhgroup": "ffdhe6144" 00:11:55.640 } 00:11:55.640 } 00:11:55.640 ]' 00:11:55.640 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.899 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.899 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.899 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:55.899 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.899 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.900 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.900 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.157 16:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:11:56.723 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:11:56.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.981 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:56.981 16:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.981 16:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.981 16:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.981 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.981 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:56.981 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.239 16:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.804 00:11:57.804 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.804 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.804 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.061 { 00:11:58.061 "cntlid": 85, 00:11:58.061 "qid": 0, 00:11:58.061 "state": "enabled", 00:11:58.061 "thread": "nvmf_tgt_poll_group_000", 00:11:58.061 "listen_address": { 00:11:58.061 "trtype": "TCP", 00:11:58.061 "adrfam": "IPv4", 00:11:58.061 "traddr": "10.0.0.2", 00:11:58.061 "trsvcid": "4420" 00:11:58.061 }, 00:11:58.061 "peer_address": { 00:11:58.061 "trtype": "TCP", 00:11:58.061 "adrfam": "IPv4", 00:11:58.061 "traddr": "10.0.0.1", 00:11:58.061 "trsvcid": "48776" 00:11:58.061 }, 00:11:58.061 "auth": { 00:11:58.061 "state": "completed", 00:11:58.061 "digest": "sha384", 00:11:58.061 "dhgroup": "ffdhe6144" 00:11:58.061 } 00:11:58.061 } 00:11:58.061 ]' 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.061 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.318 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:59.252 16:26:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.252 16:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.816 00:11:59.816 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.816 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.816 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.074 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.074 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.074 16:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.074 16:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.074 16:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.074 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.074 { 00:12:00.074 "cntlid": 87, 00:12:00.074 "qid": 0, 00:12:00.074 "state": "enabled", 00:12:00.074 "thread": "nvmf_tgt_poll_group_000", 00:12:00.074 "listen_address": { 00:12:00.074 "trtype": "TCP", 00:12:00.074 "adrfam": "IPv4", 00:12:00.074 "traddr": "10.0.0.2", 00:12:00.074 "trsvcid": "4420" 00:12:00.074 }, 00:12:00.074 "peer_address": { 00:12:00.074 "trtype": "TCP", 00:12:00.074 "adrfam": "IPv4", 00:12:00.074 "traddr": "10.0.0.1", 00:12:00.074 "trsvcid": "48806" 00:12:00.074 }, 00:12:00.074 "auth": { 00:12:00.074 "state": "completed", 00:12:00.074 "digest": "sha384", 00:12:00.074 "dhgroup": "ffdhe6144" 00:12:00.074 } 00:12:00.074 } 00:12:00.074 ]' 00:12:00.074 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.074 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:12:00.074 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.333 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:00.333 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.333 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.333 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.333 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.591 16:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:12:01.158 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.158 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:01.158 16:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.158 16:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.158 16:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.158 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:01.158 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.158 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:01.158 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.725 16:26:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.725 16:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.349 00:12:02.349 16:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.349 16:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.349 16:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.609 16:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.609 16:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.609 16:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.609 16:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.609 16:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.609 16:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.609 { 00:12:02.609 "cntlid": 89, 00:12:02.609 "qid": 0, 00:12:02.609 "state": "enabled", 00:12:02.609 "thread": "nvmf_tgt_poll_group_000", 00:12:02.609 "listen_address": { 00:12:02.609 "trtype": "TCP", 00:12:02.609 "adrfam": "IPv4", 00:12:02.609 "traddr": "10.0.0.2", 00:12:02.609 "trsvcid": "4420" 00:12:02.609 }, 00:12:02.609 "peer_address": { 00:12:02.609 "trtype": "TCP", 00:12:02.609 "adrfam": "IPv4", 00:12:02.609 "traddr": "10.0.0.1", 00:12:02.609 "trsvcid": "48846" 00:12:02.609 }, 00:12:02.609 "auth": { 00:12:02.610 "state": "completed", 00:12:02.610 "digest": "sha384", 00:12:02.610 "dhgroup": "ffdhe8192" 00:12:02.610 } 00:12:02.610 } 00:12:02.610 ]' 00:12:02.610 16:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.610 16:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.610 16:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.610 16:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.610 16:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.610 16:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.610 16:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.610 16:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.868 16:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret 
DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:12:03.801 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.801 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:03.801 16:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.801 16:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.801 16:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.801 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.801 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:03.801 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.060 16:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.626 00:12:04.626 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.626 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.626 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
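The checks that follow are one verification pass that auth.sh repeats for every digest/dhgroup/key combination: confirm the controller attached under the expected name, read the subsystem's qpairs back from the target, and assert that the negotiated digest, DH group and auth state match what was configured. A minimal sketch of that pass, reusing the paths and NQN that appear in the trace (the variable names are mine, and the target-side call assumes the default SPDK RPC socket, which the rpc_cmd wrapper in the trace hides):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: the controller created by bdev_nvme_attach_controller should be listed as nvme0.
name=$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side: the admin qpair should report this iteration's digest/dhgroup and a completed auth state.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
jq -r '.[0].auth.digest'  <<< "$qpairs"   # sha384 in this iteration
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # ffdhe8192 in this iteration
jq -r '.[0].auth.state'   <<< "$qpairs"   # "completed"

# Tear down the RPC-attached controller before the nvme-cli leg of the test.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0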
00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.884 { 00:12:04.884 "cntlid": 91, 00:12:04.884 "qid": 0, 00:12:04.884 "state": "enabled", 00:12:04.884 "thread": "nvmf_tgt_poll_group_000", 00:12:04.884 "listen_address": { 00:12:04.884 "trtype": "TCP", 00:12:04.884 "adrfam": "IPv4", 00:12:04.884 "traddr": "10.0.0.2", 00:12:04.884 "trsvcid": "4420" 00:12:04.884 }, 00:12:04.884 "peer_address": { 00:12:04.884 "trtype": "TCP", 00:12:04.884 "adrfam": "IPv4", 00:12:04.884 "traddr": "10.0.0.1", 00:12:04.884 "trsvcid": "39490" 00:12:04.884 }, 00:12:04.884 "auth": { 00:12:04.884 "state": "completed", 00:12:04.884 "digest": "sha384", 00:12:04.884 "dhgroup": "ffdhe8192" 00:12:04.884 } 00:12:04.884 } 00:12:04.884 ]' 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:04.884 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.143 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.143 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.143 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.401 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:12:05.971 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.971 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:05.971 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.971 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.971 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.971 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.971 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:12:05.971 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.230 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.166 00:12:07.166 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.166 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.166 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.166 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.424 { 00:12:07.424 "cntlid": 93, 00:12:07.424 "qid": 0, 00:12:07.424 "state": "enabled", 00:12:07.424 "thread": "nvmf_tgt_poll_group_000", 00:12:07.424 "listen_address": { 00:12:07.424 "trtype": "TCP", 00:12:07.424 "adrfam": "IPv4", 00:12:07.424 "traddr": "10.0.0.2", 00:12:07.424 "trsvcid": "4420" 00:12:07.424 }, 00:12:07.424 "peer_address": { 00:12:07.424 "trtype": "TCP", 00:12:07.424 "adrfam": "IPv4", 00:12:07.424 "traddr": "10.0.0.1", 00:12:07.424 "trsvcid": "39522" 00:12:07.424 }, 00:12:07.424 
"auth": { 00:12:07.424 "state": "completed", 00:12:07.424 "digest": "sha384", 00:12:07.424 "dhgroup": "ffdhe8192" 00:12:07.424 } 00:12:07.424 } 00:12:07.424 ]' 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.424 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.682 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:12:08.615 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.615 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:08.615 16:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.615 16:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.615 16:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.615 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.615 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:08.615 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:08.872 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:08.872 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.872 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:08.873 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:08.873 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:08.873 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.873 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:12:08.873 16:26:54 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.873 16:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.873 16:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.873 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.873 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:09.440 00:12:09.440 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.440 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.440 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.698 { 00:12:09.698 "cntlid": 95, 00:12:09.698 "qid": 0, 00:12:09.698 "state": "enabled", 00:12:09.698 "thread": "nvmf_tgt_poll_group_000", 00:12:09.698 "listen_address": { 00:12:09.698 "trtype": "TCP", 00:12:09.698 "adrfam": "IPv4", 00:12:09.698 "traddr": "10.0.0.2", 00:12:09.698 "trsvcid": "4420" 00:12:09.698 }, 00:12:09.698 "peer_address": { 00:12:09.698 "trtype": "TCP", 00:12:09.698 "adrfam": "IPv4", 00:12:09.698 "traddr": "10.0.0.1", 00:12:09.698 "trsvcid": "39552" 00:12:09.698 }, 00:12:09.698 "auth": { 00:12:09.698 "state": "completed", 00:12:09.698 "digest": "sha384", 00:12:09.698 "dhgroup": "ffdhe8192" 00:12:09.698 } 00:12:09.698 } 00:12:09.698 ]' 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.698 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.956 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:12:10.522 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.781 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:10.781 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.781 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.781 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.781 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:10.781 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:10.781 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.781 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:10.781 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.041 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.299 00:12:11.299 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
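By this point the trace has switched from sha384 with the ffdhe* groups to sha512 with the null DH group: before every attach the host is limited to exactly one digest and one DH group, so a successful connect proves that particular combination negotiated end to end. A condensed sketch of one iteration's setup, built only from the RPCs visible in the trace (digest, dhgroup and key index are the loop variables; key0/ckey0 are key names registered earlier in the test, and not every iteration passes a controller key):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
digest=sha512; dhgroup=null; key=key0

# Restrict what the host-side initiator may offer during DH-HMAC-CHAP negotiation.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host NQN on the subsystem with the matching key pair (default target RPC socket assumed).
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc \
    --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

# Attach from the host; this is where the authentication transaction actually runs.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key" --dhchap-ctrlr-key "c$key"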
00:12:11.299 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.299 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.558 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.558 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.558 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.558 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.558 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.558 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.558 { 00:12:11.558 "cntlid": 97, 00:12:11.558 "qid": 0, 00:12:11.558 "state": "enabled", 00:12:11.558 "thread": "nvmf_tgt_poll_group_000", 00:12:11.558 "listen_address": { 00:12:11.558 "trtype": "TCP", 00:12:11.558 "adrfam": "IPv4", 00:12:11.558 "traddr": "10.0.0.2", 00:12:11.558 "trsvcid": "4420" 00:12:11.558 }, 00:12:11.558 "peer_address": { 00:12:11.558 "trtype": "TCP", 00:12:11.558 "adrfam": "IPv4", 00:12:11.558 "traddr": "10.0.0.1", 00:12:11.558 "trsvcid": "39584" 00:12:11.558 }, 00:12:11.558 "auth": { 00:12:11.558 "state": "completed", 00:12:11.558 "digest": "sha512", 00:12:11.558 "dhgroup": "null" 00:12:11.558 } 00:12:11.558 } 00:12:11.558 ]' 00:12:11.558 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.558 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.558 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.558 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:11.558 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.816 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.816 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.816 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.072 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:12:12.635 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.635 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:12.635 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.635 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.635 16:26:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.635 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.635 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:12.635 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.893 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.151 00:12:13.151 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.151 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.151 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.468 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.468 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.468 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.468 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.468 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.468 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.468 { 00:12:13.468 "cntlid": 99, 00:12:13.468 "qid": 0, 00:12:13.468 "state": "enabled", 00:12:13.468 "thread": "nvmf_tgt_poll_group_000", 00:12:13.468 "listen_address": { 00:12:13.468 "trtype": "TCP", 00:12:13.468 "adrfam": 
"IPv4", 00:12:13.468 "traddr": "10.0.0.2", 00:12:13.468 "trsvcid": "4420" 00:12:13.468 }, 00:12:13.468 "peer_address": { 00:12:13.468 "trtype": "TCP", 00:12:13.468 "adrfam": "IPv4", 00:12:13.468 "traddr": "10.0.0.1", 00:12:13.468 "trsvcid": "43112" 00:12:13.468 }, 00:12:13.468 "auth": { 00:12:13.468 "state": "completed", 00:12:13.468 "digest": "sha512", 00:12:13.468 "dhgroup": "null" 00:12:13.468 } 00:12:13.468 } 00:12:13.468 ]' 00:12:13.468 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.468 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.468 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.740 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:13.740 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.740 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.740 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.740 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.998 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:12:14.566 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.566 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:14.566 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.566 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.566 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.566 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.566 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:14.566 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.132 16:27:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.132 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.391 00:12:15.391 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.391 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.391 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.651 { 00:12:15.651 "cntlid": 101, 00:12:15.651 "qid": 0, 00:12:15.651 "state": "enabled", 00:12:15.651 "thread": "nvmf_tgt_poll_group_000", 00:12:15.651 "listen_address": { 00:12:15.651 "trtype": "TCP", 00:12:15.651 "adrfam": "IPv4", 00:12:15.651 "traddr": "10.0.0.2", 00:12:15.651 "trsvcid": "4420" 00:12:15.651 }, 00:12:15.651 "peer_address": { 00:12:15.651 "trtype": "TCP", 00:12:15.651 "adrfam": "IPv4", 00:12:15.651 "traddr": "10.0.0.1", 00:12:15.651 "trsvcid": "43144" 00:12:15.651 }, 00:12:15.651 "auth": { 00:12:15.651 "state": "completed", 00:12:15.651 "digest": "sha512", 00:12:15.651 "dhgroup": "null" 00:12:15.651 } 00:12:15.651 } 00:12:15.651 ]' 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
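Each iteration then exercises the same key pair through the kernel initiator: after bdev_nvme_detach_controller, nvme-cli connects with the raw DH-HMAC-CHAP secrets, and the path is torn down so the next combination starts clean. A rough sketch with the secrets replaced by placeholders (the real DHHC-1 strings are visible in the trace and are test-only keys):

subnqn=nqn.2024-03.io.spdk:cnode0
hostuuid=6219369d-37e8-4ec9-9c79-8e30851e9efc
key='DHHC-1:...'    # placeholder for this iteration's host secret
ckey='DHHC-1:...'   # placeholder for the matching controller secret

# Kernel-initiator connect; digest and DH group follow whatever the target was configured with above.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostuuid" --hostid "$hostuuid" \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

# On success the disconnect reports "disconnected 1 controller(s)", as in the trace.
nvme disconnect -n "$subnqn"

# Drop the host entry so the next digest/dhgroup/key combination is added from scratch
# (default target RPC socket assumed, as with the other target-side calls).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" \
    "nqn.2014-08.org.nvmexpress:uuid:$hostuuid"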
00:12:15.651 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.259 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:12:16.826 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.826 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:16.826 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.826 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.826 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.826 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.826 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:16.826 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:17.084 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:17.342 00:12:17.342 16:27:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.342 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.342 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.601 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.601 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.601 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.601 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.601 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.601 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.601 { 00:12:17.601 "cntlid": 103, 00:12:17.601 "qid": 0, 00:12:17.601 "state": "enabled", 00:12:17.601 "thread": "nvmf_tgt_poll_group_000", 00:12:17.601 "listen_address": { 00:12:17.601 "trtype": "TCP", 00:12:17.601 "adrfam": "IPv4", 00:12:17.601 "traddr": "10.0.0.2", 00:12:17.601 "trsvcid": "4420" 00:12:17.601 }, 00:12:17.601 "peer_address": { 00:12:17.601 "trtype": "TCP", 00:12:17.601 "adrfam": "IPv4", 00:12:17.601 "traddr": "10.0.0.1", 00:12:17.601 "trsvcid": "43156" 00:12:17.601 }, 00:12:17.601 "auth": { 00:12:17.601 "state": "completed", 00:12:17.601 "digest": "sha512", 00:12:17.601 "dhgroup": "null" 00:12:17.601 } 00:12:17.601 } 00:12:17.601 ]' 00:12:17.601 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.601 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.601 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.872 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:17.872 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.872 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.872 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.872 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.137 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:12:18.704 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.704 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:18.704 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.704 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.704 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:18.704 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:18.704 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.704 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:18.704 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.963 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.221 00:12:19.221 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.221 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.221 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.480 { 00:12:19.480 "cntlid": 105, 00:12:19.480 "qid": 0, 00:12:19.480 "state": "enabled", 00:12:19.480 "thread": "nvmf_tgt_poll_group_000", 00:12:19.480 
"listen_address": { 00:12:19.480 "trtype": "TCP", 00:12:19.480 "adrfam": "IPv4", 00:12:19.480 "traddr": "10.0.0.2", 00:12:19.480 "trsvcid": "4420" 00:12:19.480 }, 00:12:19.480 "peer_address": { 00:12:19.480 "trtype": "TCP", 00:12:19.480 "adrfam": "IPv4", 00:12:19.480 "traddr": "10.0.0.1", 00:12:19.480 "trsvcid": "43194" 00:12:19.480 }, 00:12:19.480 "auth": { 00:12:19.480 "state": "completed", 00:12:19.480 "digest": "sha512", 00:12:19.480 "dhgroup": "ffdhe2048" 00:12:19.480 } 00:12:19.480 } 00:12:19.480 ]' 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:19.480 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.739 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.739 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.739 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.997 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:12:20.564 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.564 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:20.564 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.564 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.564 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.564 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.564 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:20.564 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.132 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.391 00:12:21.391 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.391 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.391 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.649 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.649 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.649 16:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.649 16:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.649 16:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.649 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.649 { 00:12:21.649 "cntlid": 107, 00:12:21.649 "qid": 0, 00:12:21.649 "state": "enabled", 00:12:21.649 "thread": "nvmf_tgt_poll_group_000", 00:12:21.649 "listen_address": { 00:12:21.649 "trtype": "TCP", 00:12:21.649 "adrfam": "IPv4", 00:12:21.649 "traddr": "10.0.0.2", 00:12:21.649 "trsvcid": "4420" 00:12:21.649 }, 00:12:21.649 "peer_address": { 00:12:21.649 "trtype": "TCP", 00:12:21.649 "adrfam": "IPv4", 00:12:21.649 "traddr": "10.0.0.1", 00:12:21.649 "trsvcid": "43228" 00:12:21.649 }, 00:12:21.649 "auth": { 00:12:21.649 "state": "completed", 00:12:21.649 "digest": "sha512", 00:12:21.649 "dhgroup": "ffdhe2048" 00:12:21.649 } 00:12:21.649 } 00:12:21.649 ]' 00:12:21.649 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.649 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.649 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.908 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:21.908 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.908 16:27:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.908 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.908 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.166 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:12:22.733 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.733 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:22.733 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.733 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.733 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.733 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.733 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:22.733 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.301 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.559 00:12:23.559 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.559 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.559 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.818 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.818 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.818 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.818 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.818 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.818 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.818 { 00:12:23.818 "cntlid": 109, 00:12:23.818 "qid": 0, 00:12:23.818 "state": "enabled", 00:12:23.818 "thread": "nvmf_tgt_poll_group_000", 00:12:23.818 "listen_address": { 00:12:23.818 "trtype": "TCP", 00:12:23.818 "adrfam": "IPv4", 00:12:23.818 "traddr": "10.0.0.2", 00:12:23.818 "trsvcid": "4420" 00:12:23.818 }, 00:12:23.818 "peer_address": { 00:12:23.818 "trtype": "TCP", 00:12:23.818 "adrfam": "IPv4", 00:12:23.818 "traddr": "10.0.0.1", 00:12:23.818 "trsvcid": "48754" 00:12:23.818 }, 00:12:23.818 "auth": { 00:12:23.818 "state": "completed", 00:12:23.818 "digest": "sha512", 00:12:23.818 "dhgroup": "ffdhe2048" 00:12:23.818 } 00:12:23.818 } 00:12:23.818 ]' 00:12:23.818 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.818 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.818 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.819 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:23.819 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.077 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.077 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.078 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.336 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:12:24.903 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.903 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:24.903 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.903 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.903 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.903 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.903 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:24.903 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:25.162 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:25.421 00:12:25.680 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.680 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.680 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:12:25.939 { 00:12:25.939 "cntlid": 111, 00:12:25.939 "qid": 0, 00:12:25.939 "state": "enabled", 00:12:25.939 "thread": "nvmf_tgt_poll_group_000", 00:12:25.939 "listen_address": { 00:12:25.939 "trtype": "TCP", 00:12:25.939 "adrfam": "IPv4", 00:12:25.939 "traddr": "10.0.0.2", 00:12:25.939 "trsvcid": "4420" 00:12:25.939 }, 00:12:25.939 "peer_address": { 00:12:25.939 "trtype": "TCP", 00:12:25.939 "adrfam": "IPv4", 00:12:25.939 "traddr": "10.0.0.1", 00:12:25.939 "trsvcid": "48772" 00:12:25.939 }, 00:12:25.939 "auth": { 00:12:25.939 "state": "completed", 00:12:25.939 "digest": "sha512", 00:12:25.939 "dhgroup": "ffdhe2048" 00:12:25.939 } 00:12:25.939 } 00:12:25.939 ]' 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.939 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.198 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:12:26.778 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.778 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:26.778 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.778 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.045 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.611 00:12:27.611 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.611 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.611 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.870 { 00:12:27.870 "cntlid": 113, 00:12:27.870 "qid": 0, 00:12:27.870 "state": "enabled", 00:12:27.870 "thread": "nvmf_tgt_poll_group_000", 00:12:27.870 "listen_address": { 00:12:27.870 "trtype": "TCP", 00:12:27.870 "adrfam": "IPv4", 00:12:27.870 "traddr": "10.0.0.2", 00:12:27.870 "trsvcid": "4420" 00:12:27.870 }, 00:12:27.870 "peer_address": { 00:12:27.870 "trtype": "TCP", 00:12:27.870 "adrfam": "IPv4", 00:12:27.870 "traddr": "10.0.0.1", 00:12:27.870 "trsvcid": "48810" 00:12:27.870 }, 00:12:27.870 "auth": { 00:12:27.870 "state": "completed", 00:12:27.870 "digest": "sha512", 00:12:27.870 "dhgroup": "ffdhe3072" 00:12:27.870 } 00:12:27.870 } 00:12:27.870 ]' 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.870 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.129 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:12:29.062 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.062 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:29.062 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.063 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.063 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.063 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.063 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:29.063 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.320 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.577 00:12:29.577 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.577 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.577 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.834 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.834 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.834 16:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.834 16:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.834 16:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.834 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.834 { 00:12:29.834 "cntlid": 115, 00:12:29.834 "qid": 0, 00:12:29.834 "state": "enabled", 00:12:29.834 "thread": "nvmf_tgt_poll_group_000", 00:12:29.834 "listen_address": { 00:12:29.834 "trtype": "TCP", 00:12:29.834 "adrfam": "IPv4", 00:12:29.834 "traddr": "10.0.0.2", 00:12:29.834 "trsvcid": "4420" 00:12:29.834 }, 00:12:29.834 "peer_address": { 00:12:29.834 "trtype": "TCP", 00:12:29.834 "adrfam": "IPv4", 00:12:29.834 "traddr": "10.0.0.1", 00:12:29.834 "trsvcid": "48832" 00:12:29.834 }, 00:12:29.834 "auth": { 00:12:29.834 "state": "completed", 00:12:29.834 "digest": "sha512", 00:12:29.834 "dhgroup": "ffdhe3072" 00:12:29.834 } 00:12:29.834 } 00:12:29.834 ]' 00:12:29.834 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.834 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.834 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.092 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:30.092 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.092 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.092 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.092 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.350 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:12:30.915 16:27:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.915 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:30.915 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.915 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.915 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.915 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.915 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:30.915 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.173 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.738 00:12:31.738 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.738 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.738 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.995 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.995 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:31.995 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.995 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.995 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.995 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.995 { 00:12:31.995 "cntlid": 117, 00:12:31.995 "qid": 0, 00:12:31.995 "state": "enabled", 00:12:31.995 "thread": "nvmf_tgt_poll_group_000", 00:12:31.995 "listen_address": { 00:12:31.995 "trtype": "TCP", 00:12:31.995 "adrfam": "IPv4", 00:12:31.995 "traddr": "10.0.0.2", 00:12:31.995 "trsvcid": "4420" 00:12:31.995 }, 00:12:31.995 "peer_address": { 00:12:31.995 "trtype": "TCP", 00:12:31.995 "adrfam": "IPv4", 00:12:31.995 "traddr": "10.0.0.1", 00:12:31.995 "trsvcid": "48864" 00:12:31.995 }, 00:12:31.995 "auth": { 00:12:31.996 "state": "completed", 00:12:31.996 "digest": "sha512", 00:12:31.996 "dhgroup": "ffdhe3072" 00:12:31.996 } 00:12:31.996 } 00:12:31.996 ]' 00:12:31.996 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.996 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.996 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.996 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:31.996 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.996 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.996 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.996 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.254 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:12:33.186 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.186 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:33.186 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.186 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.186 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.186 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.187 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:33.187 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:33.444 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.445 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.703 00:12:33.703 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.703 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.703 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.294 { 00:12:34.294 "cntlid": 119, 00:12:34.294 "qid": 0, 00:12:34.294 "state": "enabled", 00:12:34.294 "thread": "nvmf_tgt_poll_group_000", 00:12:34.294 "listen_address": { 00:12:34.294 "trtype": "TCP", 00:12:34.294 "adrfam": "IPv4", 00:12:34.294 "traddr": "10.0.0.2", 00:12:34.294 "trsvcid": "4420" 00:12:34.294 }, 00:12:34.294 "peer_address": { 00:12:34.294 "trtype": "TCP", 00:12:34.294 "adrfam": "IPv4", 00:12:34.294 "traddr": "10.0.0.1", 00:12:34.294 "trsvcid": "58384" 00:12:34.294 }, 00:12:34.294 "auth": { 00:12:34.294 "state": "completed", 00:12:34.294 "digest": "sha512", 00:12:34.294 "dhgroup": "ffdhe3072" 00:12:34.294 } 00:12:34.294 } 00:12:34.294 ]' 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.294 
16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.294 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.552 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.487 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.487 16:27:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.487 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.487 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.053 00:12:36.053 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.053 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.053 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.312 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.312 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.312 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.312 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.312 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.312 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.312 { 00:12:36.312 "cntlid": 121, 00:12:36.312 "qid": 0, 00:12:36.313 "state": "enabled", 00:12:36.313 "thread": "nvmf_tgt_poll_group_000", 00:12:36.313 "listen_address": { 00:12:36.313 "trtype": "TCP", 00:12:36.313 "adrfam": "IPv4", 00:12:36.313 "traddr": "10.0.0.2", 00:12:36.313 "trsvcid": "4420" 00:12:36.313 }, 00:12:36.313 "peer_address": { 00:12:36.313 "trtype": "TCP", 00:12:36.313 "adrfam": "IPv4", 00:12:36.313 "traddr": "10.0.0.1", 00:12:36.313 "trsvcid": "58418" 00:12:36.313 }, 00:12:36.313 "auth": { 00:12:36.313 "state": "completed", 00:12:36.313 "digest": "sha512", 00:12:36.313 "dhgroup": "ffdhe4096" 00:12:36.313 } 00:12:36.313 } 00:12:36.313 ]' 00:12:36.313 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.313 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.313 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.313 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:36.313 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.313 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.313 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.313 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.571 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret 
DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:12:37.139 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.139 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:37.139 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.139 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.139 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.139 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.139 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:37.139 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.398 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.983 00:12:37.983 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.983 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.983 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
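The entries above repeat the same authentication pass for each digest/dhgroup/key combination. For reference, a minimal bash sketch of one such pass (sha512 / ffdhe4096 / key1), reconstructed only from the RPC calls echoed in this log; variable names are illustrative, the target-side rpc.py calls are assumed to use the default RPC socket, and key1/ckey1 stand for keyring entries registered earlier in the test script:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc

  # Host side (RPC server on /var/tmp/host.sock): restrict the initiator to the
  # digest and DH group under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side: allow the host on the subsystem with key1, plus ckey1 for
  # bidirectional (controller) authentication.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller, authenticating with the same key pair.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Check that the controller exists and that the target-side qpair negotiated
  # the expected digest, DH group, and auth state.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Detach before the kernel-initiator (nvme-cli) half of the check.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
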
00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.241 { 00:12:38.241 "cntlid": 123, 00:12:38.241 "qid": 0, 00:12:38.241 "state": "enabled", 00:12:38.241 "thread": "nvmf_tgt_poll_group_000", 00:12:38.241 "listen_address": { 00:12:38.241 "trtype": "TCP", 00:12:38.241 "adrfam": "IPv4", 00:12:38.241 "traddr": "10.0.0.2", 00:12:38.241 "trsvcid": "4420" 00:12:38.241 }, 00:12:38.241 "peer_address": { 00:12:38.241 "trtype": "TCP", 00:12:38.241 "adrfam": "IPv4", 00:12:38.241 "traddr": "10.0.0.1", 00:12:38.241 "trsvcid": "58430" 00:12:38.241 }, 00:12:38.241 "auth": { 00:12:38.241 "state": "completed", 00:12:38.241 "digest": "sha512", 00:12:38.241 "dhgroup": "ffdhe4096" 00:12:38.241 } 00:12:38.241 } 00:12:38.241 ]' 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.241 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.500 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:12:39.067 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.067 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:39.067 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.067 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.325 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.894 00:12:39.894 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.894 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.894 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.153 { 00:12:40.153 "cntlid": 125, 00:12:40.153 "qid": 0, 00:12:40.153 "state": "enabled", 00:12:40.153 "thread": "nvmf_tgt_poll_group_000", 00:12:40.153 "listen_address": { 00:12:40.153 "trtype": "TCP", 00:12:40.153 "adrfam": "IPv4", 00:12:40.153 "traddr": "10.0.0.2", 00:12:40.153 "trsvcid": "4420" 00:12:40.153 }, 00:12:40.153 "peer_address": { 00:12:40.153 "trtype": "TCP", 00:12:40.153 "adrfam": "IPv4", 00:12:40.153 "traddr": "10.0.0.1", 00:12:40.153 "trsvcid": "58454" 00:12:40.153 }, 00:12:40.153 
"auth": { 00:12:40.153 "state": "completed", 00:12:40.153 "digest": "sha512", 00:12:40.153 "dhgroup": "ffdhe4096" 00:12:40.153 } 00:12:40.153 } 00:12:40.153 ]' 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:40.153 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.411 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.411 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.411 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.670 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:12:41.237 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.237 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:41.237 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.237 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.237 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.237 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.237 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:41.237 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.496 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.063 00:12:42.063 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.063 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.063 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.322 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.322 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.322 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.322 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.322 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.322 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.322 { 00:12:42.322 "cntlid": 127, 00:12:42.322 "qid": 0, 00:12:42.322 "state": "enabled", 00:12:42.322 "thread": "nvmf_tgt_poll_group_000", 00:12:42.322 "listen_address": { 00:12:42.322 "trtype": "TCP", 00:12:42.322 "adrfam": "IPv4", 00:12:42.322 "traddr": "10.0.0.2", 00:12:42.322 "trsvcid": "4420" 00:12:42.322 }, 00:12:42.322 "peer_address": { 00:12:42.322 "trtype": "TCP", 00:12:42.322 "adrfam": "IPv4", 00:12:42.322 "traddr": "10.0.0.1", 00:12:42.322 "trsvcid": "58488" 00:12:42.322 }, 00:12:42.322 "auth": { 00:12:42.322 "state": "completed", 00:12:42.322 "digest": "sha512", 00:12:42.322 "dhgroup": "ffdhe4096" 00:12:42.322 } 00:12:42.322 } 00:12:42.322 ]' 00:12:42.322 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.322 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.322 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.323 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:42.323 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.323 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.323 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.323 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.581 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:12:43.576 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.577 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:43.577 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.577 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.577 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.577 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.577 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.577 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:43.577 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.577 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.142 00:12:44.142 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.142 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
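The passes traced above and below all follow the same connect/verify/tear-down template from target/auth.sh. A minimal sketch of the connect step, assuming (as in this run) that the nvmf target app answers on the default RPC socket, a second SPDK application acting as the NVMe host answers on /var/tmp/host.sock, and the DH-HMAC-CHAP key objects key0/ckey0 were registered earlier in the test:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target-side RPC client (default socket)
hostsock=/var/tmp/host.sock                       # second SPDK app acting as the NVMe host
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc

# Restrict the host to a single digest/dhgroup combination for this pass.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Allow the host on the subsystem with the matching key pair (target side).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host side; this is where the DH-HMAC-CHAP handshake runs.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0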
00:12:44.142 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.400 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.400 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.400 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.400 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.400 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.400 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.400 { 00:12:44.400 "cntlid": 129, 00:12:44.400 "qid": 0, 00:12:44.400 "state": "enabled", 00:12:44.400 "thread": "nvmf_tgt_poll_group_000", 00:12:44.400 "listen_address": { 00:12:44.400 "trtype": "TCP", 00:12:44.400 "adrfam": "IPv4", 00:12:44.400 "traddr": "10.0.0.2", 00:12:44.400 "trsvcid": "4420" 00:12:44.400 }, 00:12:44.400 "peer_address": { 00:12:44.400 "trtype": "TCP", 00:12:44.400 "adrfam": "IPv4", 00:12:44.400 "traddr": "10.0.0.1", 00:12:44.400 "trsvcid": "56818" 00:12:44.400 }, 00:12:44.400 "auth": { 00:12:44.400 "state": "completed", 00:12:44.400 "digest": "sha512", 00:12:44.400 "dhgroup": "ffdhe6144" 00:12:44.400 } 00:12:44.400 } 00:12:44.400 ]' 00:12:44.400 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.400 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.400 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.657 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:44.657 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.657 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.657 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.657 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.915 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:12:45.481 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.481 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:45.481 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.481 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.481 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.481 
16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.481 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:45.481 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.739 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.305 00:12:46.305 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.305 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.305 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.562 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.562 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.562 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.562 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.562 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.562 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.562 { 00:12:46.562 "cntlid": 131, 00:12:46.562 "qid": 0, 00:12:46.562 "state": "enabled", 00:12:46.562 "thread": "nvmf_tgt_poll_group_000", 00:12:46.562 "listen_address": { 00:12:46.562 "trtype": "TCP", 00:12:46.562 "adrfam": "IPv4", 00:12:46.562 "traddr": "10.0.0.2", 00:12:46.562 "trsvcid": 
"4420" 00:12:46.562 }, 00:12:46.562 "peer_address": { 00:12:46.562 "trtype": "TCP", 00:12:46.562 "adrfam": "IPv4", 00:12:46.562 "traddr": "10.0.0.1", 00:12:46.562 "trsvcid": "56842" 00:12:46.562 }, 00:12:46.562 "auth": { 00:12:46.562 "state": "completed", 00:12:46.562 "digest": "sha512", 00:12:46.562 "dhgroup": "ffdhe6144" 00:12:46.562 } 00:12:46.562 } 00:12:46.562 ]' 00:12:46.562 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.562 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.562 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.819 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:46.819 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.819 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.819 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.819 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.076 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.007 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.570 00:12:48.570 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.570 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.570 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.826 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.826 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.827 { 00:12:48.827 "cntlid": 133, 00:12:48.827 "qid": 0, 00:12:48.827 "state": "enabled", 00:12:48.827 "thread": "nvmf_tgt_poll_group_000", 00:12:48.827 "listen_address": { 00:12:48.827 "trtype": "TCP", 00:12:48.827 "adrfam": "IPv4", 00:12:48.827 "traddr": "10.0.0.2", 00:12:48.827 "trsvcid": "4420" 00:12:48.827 }, 00:12:48.827 "peer_address": { 00:12:48.827 "trtype": "TCP", 00:12:48.827 "adrfam": "IPv4", 00:12:48.827 "traddr": "10.0.0.1", 00:12:48.827 "trsvcid": "56858" 00:12:48.827 }, 00:12:48.827 "auth": { 00:12:48.827 "state": "completed", 00:12:48.827 "digest": "sha512", 00:12:48.827 "dhgroup": "ffdhe6144" 00:12:48.827 } 00:12:48.827 } 00:12:48.827 ]' 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
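Each attach is followed by the same verification step: the controller name is read back from the host side and the target's qpair list must report the digest, dhgroup and auth state that were just negotiated, after which the host-side controller is detached. A sketch of that check under the same socket/NQN assumptions as the previous sketch:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0

# The attached controller must show up on the host side under the expected name.
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target's qpair must report the negotiated auth parameters.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear the host-side controller down before the next digest/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0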
00:12:48.827 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.391 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:12:49.957 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.957 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:49.957 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.957 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.957 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.957 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.957 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:49.957 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:50.215 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:50.215 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.215 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:50.215 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:50.216 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:50.216 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.216 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:12:50.216 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.216 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.216 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.216 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.216 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.783 00:12:50.783 16:27:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.783 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.783 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.042 { 00:12:51.042 "cntlid": 135, 00:12:51.042 "qid": 0, 00:12:51.042 "state": "enabled", 00:12:51.042 "thread": "nvmf_tgt_poll_group_000", 00:12:51.042 "listen_address": { 00:12:51.042 "trtype": "TCP", 00:12:51.042 "adrfam": "IPv4", 00:12:51.042 "traddr": "10.0.0.2", 00:12:51.042 "trsvcid": "4420" 00:12:51.042 }, 00:12:51.042 "peer_address": { 00:12:51.042 "trtype": "TCP", 00:12:51.042 "adrfam": "IPv4", 00:12:51.042 "traddr": "10.0.0.1", 00:12:51.042 "trsvcid": "56878" 00:12:51.042 }, 00:12:51.042 "auth": { 00:12:51.042 "state": "completed", 00:12:51.042 "digest": "sha512", 00:12:51.042 "dhgroup": "ffdhe6144" 00:12:51.042 } 00:12:51.042 } 00:12:51.042 ]' 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.042 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.302 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.237 16:27:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.237 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.495 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.495 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.495 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.061 00:12:53.061 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.061 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.061 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.319 { 00:12:53.319 "cntlid": 137, 00:12:53.319 "qid": 0, 00:12:53.319 "state": "enabled", 
00:12:53.319 "thread": "nvmf_tgt_poll_group_000", 00:12:53.319 "listen_address": { 00:12:53.319 "trtype": "TCP", 00:12:53.319 "adrfam": "IPv4", 00:12:53.319 "traddr": "10.0.0.2", 00:12:53.319 "trsvcid": "4420" 00:12:53.319 }, 00:12:53.319 "peer_address": { 00:12:53.319 "trtype": "TCP", 00:12:53.319 "adrfam": "IPv4", 00:12:53.319 "traddr": "10.0.0.1", 00:12:53.319 "trsvcid": "58034" 00:12:53.319 }, 00:12:53.319 "auth": { 00:12:53.319 "state": "completed", 00:12:53.319 "digest": "sha512", 00:12:53.319 "dhgroup": "ffdhe8192" 00:12:53.319 } 00:12:53.319 } 00:12:53.319 ]' 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.319 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.577 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:12:54.511 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.511 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:54.511 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.511 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.511 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.511 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.511 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:54.511 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:54.769 
16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.769 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.335 00:12:55.335 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.335 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.335 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.593 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.593 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.593 16:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.593 16:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.593 16:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.593 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.593 { 00:12:55.593 "cntlid": 139, 00:12:55.593 "qid": 0, 00:12:55.593 "state": "enabled", 00:12:55.593 "thread": "nvmf_tgt_poll_group_000", 00:12:55.593 "listen_address": { 00:12:55.593 "trtype": "TCP", 00:12:55.593 "adrfam": "IPv4", 00:12:55.593 "traddr": "10.0.0.2", 00:12:55.593 "trsvcid": "4420" 00:12:55.593 }, 00:12:55.593 "peer_address": { 00:12:55.594 "trtype": "TCP", 00:12:55.594 "adrfam": "IPv4", 00:12:55.594 "traddr": "10.0.0.1", 00:12:55.594 "trsvcid": "58062" 00:12:55.594 }, 00:12:55.594 "auth": { 00:12:55.594 "state": "completed", 00:12:55.594 "digest": "sha512", 00:12:55.594 "dhgroup": "ffdhe8192" 00:12:55.594 } 00:12:55.594 } 00:12:55.594 ]' 00:12:55.594 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.594 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.594 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.594 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:55.594 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:12:55.865 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.865 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.865 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.136 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:01:ZWFmMzRkOGI3Y2NjN2RkZDE2YTU3N2E5MWIyZjU0ZDQWDlWU: --dhchap-ctrl-secret DHHC-1:02:NDJhMTVmYjI3OTliN2Y5ZjkyMjI0ODVjZTYxY2ZlMmNhNzQyMTlhMzRiOTcyMzAxnRviuA==: 00:12:56.711 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.711 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:56.711 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.711 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.711 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.711 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.711 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:56.711 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.969 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.535 00:12:57.535 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.536 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.536 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.794 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.794 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.794 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.794 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.052 { 00:12:58.052 "cntlid": 141, 00:12:58.052 "qid": 0, 00:12:58.052 "state": "enabled", 00:12:58.052 "thread": "nvmf_tgt_poll_group_000", 00:12:58.052 "listen_address": { 00:12:58.052 "trtype": "TCP", 00:12:58.052 "adrfam": "IPv4", 00:12:58.052 "traddr": "10.0.0.2", 00:12:58.052 "trsvcid": "4420" 00:12:58.052 }, 00:12:58.052 "peer_address": { 00:12:58.052 "trtype": "TCP", 00:12:58.052 "adrfam": "IPv4", 00:12:58.052 "traddr": "10.0.0.1", 00:12:58.052 "trsvcid": "58090" 00:12:58.052 }, 00:12:58.052 "auth": { 00:12:58.052 "state": "completed", 00:12:58.052 "digest": "sha512", 00:12:58.052 "dhgroup": "ffdhe8192" 00:12:58.052 } 00:12:58.052 } 00:12:58.052 ]' 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.052 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.312 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:02:NGQ0MTdmMWYyYjRjYjNlMWUwZDJkZDQxZGQ5ZDBkM2M1MjZhMzk3MGU0ZTRlZjgwAJMO5w==: --dhchap-ctrl-secret DHHC-1:01:YTEyMzE3NTAwZGFlNjBlMTFlOGI4MTg2NGYwZDRiM2HgX6D1: 00:12:58.880 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.880 16:27:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:12:58.880 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.880 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.880 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.880 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.880 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:58.880 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.449 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.016 00:13:00.016 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.016 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.016 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
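After the SPDK host-side controller is detached, every pass also exercises the same subsystem from the kernel initiator with nvme-cli using plain-text DHHC-1 secrets, then revokes the host on the target before the next key/dhgroup combination. A sketch of that leg; the secrets shown are placeholders, not the ones generated by this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid

# In-band authenticated connect from the kernel initiator (placeholder secrets).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>'
nvme disconnect -n "$subnqn"

# Revoke the host on the target before moving to the next key/dhgroup pass.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"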
00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.275 { 00:13:00.275 "cntlid": 143, 00:13:00.275 "qid": 0, 00:13:00.275 "state": "enabled", 00:13:00.275 "thread": "nvmf_tgt_poll_group_000", 00:13:00.275 "listen_address": { 00:13:00.275 "trtype": "TCP", 00:13:00.275 "adrfam": "IPv4", 00:13:00.275 "traddr": "10.0.0.2", 00:13:00.275 "trsvcid": "4420" 00:13:00.275 }, 00:13:00.275 "peer_address": { 00:13:00.275 "trtype": "TCP", 00:13:00.275 "adrfam": "IPv4", 00:13:00.275 "traddr": "10.0.0.1", 00:13:00.275 "trsvcid": "58112" 00:13:00.275 }, 00:13:00.275 "auth": { 00:13:00.275 "state": "completed", 00:13:00.275 "digest": "sha512", 00:13:00.275 "dhgroup": "ffdhe8192" 00:13:00.275 } 00:13:00.275 } 00:13:00.275 ]' 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:00.275 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.533 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.533 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.533 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.792 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:13:01.358 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.358 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:01.358 16:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.358 16:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.358 16:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.358 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:01.358 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:01.358 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:01.358 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:01.359 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:01.359 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.617 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.551 00:13:02.551 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.551 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.551 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.809 { 00:13:02.809 "cntlid": 145, 00:13:02.809 "qid": 0, 00:13:02.809 "state": "enabled", 00:13:02.809 "thread": "nvmf_tgt_poll_group_000", 00:13:02.809 "listen_address": { 00:13:02.809 "trtype": "TCP", 00:13:02.809 "adrfam": "IPv4", 00:13:02.809 "traddr": "10.0.0.2", 00:13:02.809 "trsvcid": "4420" 00:13:02.809 }, 00:13:02.809 "peer_address": { 00:13:02.809 "trtype": "TCP", 00:13:02.809 "adrfam": "IPv4", 00:13:02.809 "traddr": "10.0.0.1", 00:13:02.809 "trsvcid": "58126" 00:13:02.809 }, 00:13:02.809 "auth": { 00:13:02.809 "state": "completed", 00:13:02.809 "digest": "sha512", 00:13:02.809 "dhgroup": "ffdhe8192" 00:13:02.809 } 00:13:02.809 } 
00:13:02.809 ]' 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.809 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.067 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret DHHC-1:00:ZGU1OTFmYmZmZDY3MGFlZTZkZTM0ZDM4MjQxYzRhMGVmNGNlMjVlMGU2YTQ0OWY4G3cBdw==: --dhchap-ctrl-secret DHHC-1:03:NDQ5ODU1MzY2M2Y4NTNkYTdlNmYwZTJlNWE3N2MxMGY5MDFjNzA4YWRkMWQwNGU3ZDZiYTkzODVjNmU3YWUxZrxaZd0=: 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.002 16:27:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:04.002 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:04.566 request: 00:13:04.566 { 00:13:04.566 "name": "nvme0", 00:13:04.566 "trtype": "tcp", 00:13:04.566 "traddr": "10.0.0.2", 00:13:04.566 "adrfam": "ipv4", 00:13:04.566 "trsvcid": "4420", 00:13:04.566 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:04.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc", 00:13:04.566 "prchk_reftag": false, 00:13:04.566 "prchk_guard": false, 00:13:04.566 "hdgst": false, 00:13:04.566 "ddgst": false, 00:13:04.566 "dhchap_key": "key2", 00:13:04.566 "method": "bdev_nvme_attach_controller", 00:13:04.566 "req_id": 1 00:13:04.566 } 00:13:04.566 Got JSON-RPC error response 00:13:04.566 response: 00:13:04.566 { 00:13:04.566 "code": -5, 00:13:04.566 "message": "Input/output error" 00:13:04.566 } 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:04.566 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:05.132 request: 00:13:05.132 { 00:13:05.132 "name": "nvme0", 00:13:05.132 "trtype": "tcp", 00:13:05.132 "traddr": "10.0.0.2", 00:13:05.132 "adrfam": "ipv4", 00:13:05.132 "trsvcid": "4420", 00:13:05.132 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:05.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc", 00:13:05.132 "prchk_reftag": false, 00:13:05.132 "prchk_guard": false, 00:13:05.132 "hdgst": false, 00:13:05.132 "ddgst": false, 00:13:05.132 "dhchap_key": "key1", 00:13:05.132 "dhchap_ctrlr_key": "ckey2", 00:13:05.132 "method": "bdev_nvme_attach_controller", 00:13:05.132 "req_id": 1 00:13:05.132 } 00:13:05.132 Got JSON-RPC error response 00:13:05.132 response: 00:13:05.132 { 00:13:05.132 "code": -5, 00:13:05.132 "message": "Input/output error" 00:13:05.132 } 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key1 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.132 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.699 request: 00:13:05.699 { 00:13:05.699 "name": "nvme0", 00:13:05.699 "trtype": "tcp", 00:13:05.699 "traddr": "10.0.0.2", 00:13:05.699 "adrfam": "ipv4", 00:13:05.699 "trsvcid": "4420", 00:13:05.699 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:05.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc", 00:13:05.699 "prchk_reftag": false, 00:13:05.699 "prchk_guard": false, 00:13:05.699 "hdgst": false, 00:13:05.699 "ddgst": false, 00:13:05.699 "dhchap_key": "key1", 00:13:05.699 "dhchap_ctrlr_key": "ckey1", 00:13:05.699 "method": "bdev_nvme_attach_controller", 00:13:05.699 "req_id": 1 00:13:05.699 } 00:13:05.699 Got JSON-RPC error response 00:13:05.699 response: 00:13:05.699 { 00:13:05.699 "code": -5, 00:13:05.699 "message": "Input/output error" 00:13:05.699 } 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69245 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69245 ']' 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69245 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69245 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:05.699 killing process with pid 69245 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69245' 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69245 00:13:05.699 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69245 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72323 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72323 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72323 ']' 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
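The mismatched-key checks traced above (target/auth.sh@117 through @135) all follow one pattern: register the host on the subsystem with one DH-HMAC-CHAP key, then attempt the host-side attach with a different key or controller key and require the JSON-RPC call to fail with code -5 (Input/output error). A condensed sketch of that pattern, reusing the sockets, NQNs and key names already set up earlier in this run, looks roughly like:

# target side: allow the host, but only with key1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc \
    --dhchap-key key1

# host side: attaching with key2 is expected to fail (JSON-RPC code -5)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 \
    || echo 'attach failed as expected'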
00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.957 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.889 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.889 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:06.889 16:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.889 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.889 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.148 16:27:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.148 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:07.148 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72323 00:13:07.148 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72323 ']' 00:13:07.148 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.148 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.148 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.148 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.148 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:07.406 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.407 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.975 00:13:07.975 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.975 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.975 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.234 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.234 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.234 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.234 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.234 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.234 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.234 { 00:13:08.234 "cntlid": 1, 00:13:08.234 "qid": 0, 00:13:08.234 "state": "enabled", 00:13:08.234 "thread": "nvmf_tgt_poll_group_000", 00:13:08.234 "listen_address": { 00:13:08.234 "trtype": "TCP", 00:13:08.234 "adrfam": "IPv4", 00:13:08.234 "traddr": "10.0.0.2", 00:13:08.234 "trsvcid": "4420" 00:13:08.234 }, 00:13:08.234 "peer_address": { 00:13:08.234 "trtype": "TCP", 00:13:08.234 "adrfam": "IPv4", 00:13:08.234 "traddr": "10.0.0.1", 00:13:08.234 "trsvcid": "49146" 00:13:08.234 }, 00:13:08.234 "auth": { 00:13:08.234 "state": "completed", 00:13:08.234 "digest": "sha512", 00:13:08.234 "dhgroup": "ffdhe8192" 00:13:08.234 } 00:13:08.234 } 00:13:08.234 ]' 00:13:08.234 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.492 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.492 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.492 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:08.492 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.492 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.492 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.492 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.751 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid 6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-secret 
DHHC-1:03:YmU0OTEyNWU5ODkxZTMyZGNlNGU4M2M5NTg3OWNjNGE0YmFiZGM3YTNmMTNhZTk3NTk4NWQ2ZWFkNzAyN2Y4NOcNb+Y=: 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --dhchap-key key3 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:09.734 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:09.734 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:09.734 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:09.734 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:09.734 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:09.734 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.734 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:09.734 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.734 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:09.734 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.016 request: 00:13:10.016 { 00:13:10.016 "name": "nvme0", 00:13:10.016 "trtype": "tcp", 00:13:10.016 "traddr": "10.0.0.2", 00:13:10.016 "adrfam": "ipv4", 00:13:10.016 "trsvcid": "4420", 
00:13:10.016 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:10.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc", 00:13:10.016 "prchk_reftag": false, 00:13:10.016 "prchk_guard": false, 00:13:10.016 "hdgst": false, 00:13:10.016 "ddgst": false, 00:13:10.016 "dhchap_key": "key3", 00:13:10.016 "method": "bdev_nvme_attach_controller", 00:13:10.016 "req_id": 1 00:13:10.016 } 00:13:10.016 Got JSON-RPC error response 00:13:10.016 response: 00:13:10.016 { 00:13:10.016 "code": -5, 00:13:10.016 "message": "Input/output error" 00:13:10.016 } 00:13:10.016 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:10.016 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.016 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:10.016 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.016 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:10.016 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:10.016 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:10.016 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:10.583 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.583 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:10.583 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.583 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:10.583 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.583 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:10.583 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.583 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.583 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.583 request: 00:13:10.583 { 00:13:10.583 "name": "nvme0", 00:13:10.583 "trtype": "tcp", 00:13:10.583 "traddr": "10.0.0.2", 00:13:10.583 "adrfam": "ipv4", 00:13:10.583 "trsvcid": "4420", 00:13:10.583 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:10.583 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc", 00:13:10.583 "prchk_reftag": false, 00:13:10.583 "prchk_guard": false, 00:13:10.583 "hdgst": false, 00:13:10.583 "ddgst": false, 00:13:10.583 "dhchap_key": "key3", 00:13:10.583 "method": "bdev_nvme_attach_controller", 00:13:10.583 "req_id": 1 00:13:10.583 } 00:13:10.583 Got JSON-RPC error response 00:13:10.583 response: 00:13:10.583 { 00:13:10.583 "code": -5, 00:13:10.583 "message": "Input/output error" 00:13:10.583 } 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:10.583 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:10.841 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:11.099 request: 00:13:11.099 { 00:13:11.099 "name": "nvme0", 00:13:11.099 "trtype": "tcp", 00:13:11.099 "traddr": "10.0.0.2", 00:13:11.099 "adrfam": "ipv4", 00:13:11.100 "trsvcid": "4420", 00:13:11.100 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:11.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc", 00:13:11.100 "prchk_reftag": false, 00:13:11.100 "prchk_guard": false, 00:13:11.100 "hdgst": false, 00:13:11.100 "ddgst": false, 00:13:11.100 "dhchap_key": "key0", 00:13:11.100 "dhchap_ctrlr_key": "key1", 00:13:11.100 "method": "bdev_nvme_attach_controller", 00:13:11.100 "req_id": 1 00:13:11.100 } 00:13:11.100 Got JSON-RPC error response 00:13:11.100 response: 00:13:11.100 { 00:13:11.100 "code": -5, 00:13:11.100 "message": "Input/output error" 00:13:11.100 } 00:13:11.100 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:11.100 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:11.100 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:11.100 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:11.100 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:11.100 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:11.358 00:13:11.358 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:11.358 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.358 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:11.615 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.615 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.615 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69277 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69277 ']' 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69277 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69277 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:12.181 killing process with pid 69277 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69277' 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69277 00:13:12.181 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69277 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.439 rmmod nvme_tcp 00:13:12.439 rmmod nvme_fabrics 00:13:12.439 rmmod nvme_keyring 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72323 ']' 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72323 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72323 ']' 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72323 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.439 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72323 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:12.697 killing process with pid 72323 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72323' 
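For reference before the teardown completes: earlier in this run (target/auth.sh@157 through @176) the host also narrows its own DH-HMAC-CHAP parameters with bdev_nvme_set_options, and the subsequent key3 attach is then expected to fail during authentication negotiation. Condensed from the trace, using the same host RPC socket as above:

# restrict the host to SHA-256 only; the following key3 attach is expected to fail
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256

# restore the full digest and DH-group sets afterwards
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192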
00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72323 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72323 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.697 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.956 16:27:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:12.956 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.T3K /tmp/spdk.key-sha256.DHA /tmp/spdk.key-sha384.Y8i /tmp/spdk.key-sha512.xUR /tmp/spdk.key-sha512.Pkr /tmp/spdk.key-sha384.uBm /tmp/spdk.key-sha256.6Vk '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:12.956 00:13:12.956 real 2m52.814s 00:13:12.956 user 6m53.856s 00:13:12.956 sys 0m26.726s 00:13:12.956 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.956 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.956 ************************************ 00:13:12.956 END TEST nvmf_auth_target 00:13:12.956 ************************************ 00:13:12.956 16:27:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:12.956 16:27:58 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:12.956 16:27:58 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:12.956 16:27:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:12.956 16:27:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.956 16:27:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:12.956 ************************************ 00:13:12.956 START TEST nvmf_bdevio_no_huge 00:13:12.956 ************************************ 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:12.956 * Looking for test storage... 
00:13:12.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:12.956 16:27:58 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:12.956 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:12.957 Cannot find device "nvmf_tgt_br" 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.957 Cannot find device "nvmf_tgt_br2" 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:12.957 Cannot find device "nvmf_tgt_br" 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:12.957 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:13.249 Cannot find device "nvmf_tgt_br2" 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:13.249 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:13.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:13:13.250 00:13:13.250 --- 10.0.0.2 ping statistics --- 00:13:13.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.250 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:13.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:13.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:13:13.250 00:13:13.250 --- 10.0.0.3 ping statistics --- 00:13:13.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.250 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:13.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:13:13.250 00:13:13.250 --- 10.0.0.1 ping statistics --- 00:13:13.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.250 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:13.250 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
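Before the bdevio target comes up, nvmf_veth_init (traced above) builds the test network: a namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end nvmf_init_if (10.0.0.1) left in the root namespace, all joined by the bridge nvmf_br, with TCP port 4420 allowed through iptables and a ping per address as a sanity check. Stripped of the "Cannot find device" cleanup noise, and omitting the second target interface pair for brevity, the sequence is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1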
00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72644 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72644 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72644 ']' 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.507 16:27:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.507 [2024-07-15 16:27:58.878757] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:13:13.507 [2024-07-15 16:27:58.878852] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:13.508 [2024-07-15 16:27:59.033718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.770 [2024-07-15 16:27:59.153856] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.771 [2024-07-15 16:27:59.154411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.771 [2024-07-15 16:27:59.154771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.771 [2024-07-15 16:27:59.155199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.771 [2024-07-15 16:27:59.155401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
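What nvmfappstart and waitforlisten amount to here: launch nvmf_tgt inside the target namespace with hugepages disabled (--no-huge -s 1024) and core mask 0x78, then poll the UNIX-domain RPC socket until the app answers. The loop below is a simplified stand-in for the waitforlisten helper, not its exact code; the binary path and flags are copied from the trace.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# Poll until the target serves RPCs on /var/tmp/spdk.sock.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done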
00:13:13.771 [2024-07-15 16:27:59.155809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:13.771 [2024-07-15 16:27:59.155944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:13.771 [2024-07-15 16:27:59.156046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:13.771 [2024-07-15 16:27:59.156055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.771 [2024-07-15 16:27:59.161291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:14.705 [2024-07-15 16:27:59.953311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:14.705 Malloc0 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:14.705 [2024-07-15 16:27:59.994474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:14.705 16:27:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:14.705 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:14.705 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:14.706 { 00:13:14.706 "params": { 00:13:14.706 "name": "Nvme$subsystem", 00:13:14.706 "trtype": "$TEST_TRANSPORT", 00:13:14.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:14.706 "adrfam": "ipv4", 00:13:14.706 "trsvcid": "$NVMF_PORT", 00:13:14.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:14.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:14.706 "hdgst": ${hdgst:-false}, 00:13:14.706 "ddgst": ${ddgst:-false} 00:13:14.706 }, 00:13:14.706 "method": "bdev_nvme_attach_controller" 00:13:14.706 } 00:13:14.706 EOF 00:13:14.706 )") 00:13:14.706 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:14.706 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:14.706 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:14.706 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:14.706 "params": { 00:13:14.706 "name": "Nvme1", 00:13:14.706 "trtype": "tcp", 00:13:14.706 "traddr": "10.0.0.2", 00:13:14.706 "adrfam": "ipv4", 00:13:14.706 "trsvcid": "4420", 00:13:14.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:14.706 "hdgst": false, 00:13:14.706 "ddgst": false 00:13:14.706 }, 00:13:14.706 "method": "bdev_nvme_attach_controller" 00:13:14.706 }' 00:13:14.706 [2024-07-15 16:28:00.058194] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
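The JSON printed by gen_nvmf_target_json above is what bdevio receives through --json /dev/fd/62 (a process-substitution file descriptor). The outer wrapper is not shown in the trace; the sketch below assumes the usual "subsystems"/"bdev" layout of an SPDK --json config and reuses the parameter values printed above, writing them to a temporary file instead of a pipe.

cat > /tmp/bdevio_nvme.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024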
00:13:14.706 [2024-07-15 16:28:00.058321] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72684 ] 00:13:14.706 [2024-07-15 16:28:00.217636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:14.964 [2024-07-15 16:28:00.362164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.964 [2024-07-15 16:28:00.362307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.964 [2024-07-15 16:28:00.362314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.964 [2024-07-15 16:28:00.376577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:15.223 I/O targets: 00:13:15.223 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:15.223 00:13:15.223 00:13:15.223 CUnit - A unit testing framework for C - Version 2.1-3 00:13:15.223 http://cunit.sourceforge.net/ 00:13:15.223 00:13:15.223 00:13:15.223 Suite: bdevio tests on: Nvme1n1 00:13:15.223 Test: blockdev write read block ...passed 00:13:15.223 Test: blockdev write zeroes read block ...passed 00:13:15.223 Test: blockdev write zeroes read no split ...passed 00:13:15.223 Test: blockdev write zeroes read split ...passed 00:13:15.223 Test: blockdev write zeroes read split partial ...passed 00:13:15.223 Test: blockdev reset ...[2024-07-15 16:28:00.588013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:15.223 [2024-07-15 16:28:00.588132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b3870 (9): Bad file descriptor 00:13:15.223 [2024-07-15 16:28:00.615545] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:15.223 passed 00:13:15.223 Test: blockdev write read 8 blocks ...passed 00:13:15.223 Test: blockdev write read size > 128k ...passed 00:13:15.223 Test: blockdev write read invalid size ...passed 00:13:15.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:15.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:15.223 Test: blockdev write read max offset ...passed 00:13:15.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:15.223 Test: blockdev writev readv 8 blocks ...passed 00:13:15.223 Test: blockdev writev readv 30 x 1block ...passed 00:13:15.223 Test: blockdev writev readv block ...passed 00:13:15.223 Test: blockdev writev readv size > 128k ...passed 00:13:15.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:15.223 Test: blockdev comparev and writev ...[2024-07-15 16:28:00.623762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:15.223 [2024-07-15 16:28:00.623805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.623826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:15.223 [2024-07-15 16:28:00.623837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.624462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:15.223 [2024-07-15 16:28:00.624493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.624512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:15.223 [2024-07-15 16:28:00.624522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.625250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:15.223 [2024-07-15 16:28:00.625280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.625298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:15.223 [2024-07-15 16:28:00.625309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.626656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:15.223 [2024-07-15 16:28:00.626685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.626703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:15.223 [2024-07-15 16:28:00.626713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:15.223 passed 00:13:15.223 Test: blockdev nvme passthru rw ...passed 00:13:15.223 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:28:00.627565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:15.223 [2024-07-15 16:28:00.627595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.627698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:15.223 [2024-07-15 16:28:00.627714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.627817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:15.223 [2024-07-15 16:28:00.627838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:15.223 [2024-07-15 16:28:00.627951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:15.223 [2024-07-15 16:28:00.627978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:15.223 passed 00:13:15.223 Test: blockdev nvme admin passthru ...passed 00:13:15.223 Test: blockdev copy ...passed 00:13:15.223 00:13:15.223 Run Summary: Type Total Ran Passed Failed Inactive 00:13:15.223 suites 1 1 n/a 0 0 00:13:15.223 tests 23 23 23 0 0 00:13:15.223 asserts 152 152 152 0 n/a 00:13:15.223 00:13:15.223 Elapsed time = 0.186 seconds 00:13:15.481 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.481 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.481 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:15.481 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.481 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:15.481 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:15.481 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.481 16:28:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.739 rmmod nvme_tcp 00:13:15.739 rmmod nvme_fabrics 00:13:15.739 rmmod nvme_keyring 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72644 ']' 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72644 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72644 ']' 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72644 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72644 00:13:15.739 killing process with pid 72644 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72644' 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72644 00:13:15.739 16:28:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72644 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:17.115 00:13:17.115 real 0m4.262s 00:13:17.115 user 0m11.417s 00:13:17.115 sys 0m1.507s 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.115 16:28:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.115 ************************************ 00:13:17.115 END TEST nvmf_bdevio_no_huge 00:13:17.115 ************************************ 00:13:17.115 16:28:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:17.115 16:28:02 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:17.115 16:28:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.115 16:28:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.115 16:28:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.115 ************************************ 00:13:17.115 START TEST nvmf_tls 00:13:17.115 ************************************ 00:13:17.115 16:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:17.376 * Looking for test storage... 
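run_test is the autotest wrapper that produced the START TEST / END TEST banners and the real/user/sys timing above. A simplified sketch of what it does around tls.sh (not the exact autotest_common.sh implementation):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                                  # run the test script; prints real/user/sys
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp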
00:13:17.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:17.376 16:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:17.376 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:17.376 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.376 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.376 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.376 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.376 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:17.377 Cannot find device "nvmf_tgt_br" 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:17.377 Cannot find device "nvmf_tgt_br2" 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:17.377 Cannot find device "nvmf_tgt_br" 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:17.377 Cannot find device "nvmf_tgt_br2" 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:17.377 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:17.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:17.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:17.654 16:28:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:17.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:13:17.654 00:13:17.654 --- 10.0.0.2 ping statistics --- 00:13:17.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.654 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:17.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:17.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:13:17.654 00:13:17.654 --- 10.0.0.3 ping statistics --- 00:13:17.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.654 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:17.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:17.654 00:13:17.654 --- 10.0.0.1 ping statistics --- 00:13:17.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.654 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72874 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72874 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72874 ']' 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.654 16:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.925 [2024-07-15 16:28:03.240245] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:13:17.925 [2024-07-15 16:28:03.240330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.925 [2024-07-15 16:28:03.380795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.183 [2024-07-15 16:28:03.504741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.183 [2024-07-15 16:28:03.504793] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:18.183 [2024-07-15 16:28:03.504807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.183 [2024-07-15 16:28:03.504817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.183 [2024-07-15 16:28:03.504826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.183 [2024-07-15 16:28:03.504869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.751 16:28:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.751 16:28:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:18.751 16:28:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.751 16:28:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:18.751 16:28:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.009 16:28:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.009 16:28:04 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:19.009 16:28:04 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:19.267 true 00:13:19.267 16:28:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:19.267 16:28:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:19.526 16:28:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:19.526 16:28:04 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:19.526 16:28:04 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:19.784 16:28:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:19.784 16:28:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:20.043 16:28:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:20.043 16:28:05 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:20.043 16:28:05 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:20.301 16:28:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:20.301 16:28:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:20.869 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:20.869 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:20.869 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:20.869 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:20.869 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:20.869 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:20.869 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:21.127 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:21.127 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
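Stripped of the jq plumbing, the tls.sh option checks traced above reduce to a handful of sock RPCs against the ssl implementation; the expected values in the comments are the ones echoed in the trace, and the kTLS toggle is switched back off in the lines that follow.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_get_options -i ssl | jq -r .tls_version     # default: 0
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc sock_impl_get_options -i ssl | jq -r .tls_version     # now 13
$rpc sock_impl_set_options -i ssl --tls-version 7
$rpc sock_impl_get_options -i ssl | jq -r .tls_version     # now 7
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls     # default: false
$rpc sock_impl_set_options -i ssl --enable-ktls            # toggled on here, disabled again below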
00:13:21.384 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:21.384 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:21.384 16:28:06 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:21.641 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:21.641 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:21.898 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:21.898 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:21.898 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:21.898 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:21.898 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:21.898 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:21.898 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:21.898 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:21.898 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.BGQrpV4wXT 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.msJsERpKGe 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.BGQrpV4wXT 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.msJsERpKGe 00:13:22.154 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:22.431 16:28:07 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:22.690 [2024-07-15 16:28:08.081719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:22.690 16:28:08 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.BGQrpV4wXT 00:13:22.691 16:28:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BGQrpV4wXT 00:13:22.691 16:28:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:22.949 [2024-07-15 16:28:08.361607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.949 16:28:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:23.207 16:28:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:23.465 [2024-07-15 16:28:08.925788] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:23.465 [2024-07-15 16:28:08.926069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.465 16:28:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:23.722 malloc0 00:13:23.722 16:28:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:23.981 16:28:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BGQrpV4wXT 00:13:24.363 [2024-07-15 16:28:09.677427] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:24.363 16:28:09 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.BGQrpV4wXT 00:13:34.350 Initializing NVMe Controllers 00:13:34.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:34.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:34.350 Initialization complete. Launching workers. 
00:13:34.350 ======================================================== 00:13:34.350 Latency(us) 00:13:34.350 Device Information : IOPS MiB/s Average min max 00:13:34.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9610.85 37.54 6660.68 1620.44 9270.25 00:13:34.350 ======================================================== 00:13:34.350 Total : 9610.85 37.54 6660.68 1620.44 9270.25 00:13:34.350 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BGQrpV4wXT 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BGQrpV4wXT' 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73110 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73110 /var/tmp/bdevperf.sock 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73110 ']' 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:34.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:34.350 16:28:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.609 [2024-07-15 16:28:19.955377] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
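The IOPS/latency table above comes from a single spdk_nvme_perf run against the TLS-enabled listener; the invocation, copied from the trace, runs inside the target namespace and points --psk-path at the key file written earlier:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path /tmp/tmp.BGQrpV4wXT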
00:13:34.609 [2024-07-15 16:28:19.955801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73110 ] 00:13:34.609 [2024-07-15 16:28:20.101990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.868 [2024-07-15 16:28:20.238572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.868 [2024-07-15 16:28:20.299521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:35.435 16:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.435 16:28:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:35.435 16:28:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BGQrpV4wXT 00:13:35.693 [2024-07-15 16:28:21.206545] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:35.693 [2024-07-15 16:28:21.206684] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:35.953 TLSTESTn1 00:13:35.953 16:28:21 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:35.953 Running I/O for 10 seconds... 00:13:45.994 00:13:45.994 Latency(us) 00:13:45.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.994 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:45.994 Verification LBA range: start 0x0 length 0x2000 00:13:45.994 TLSTESTn1 : 10.02 3860.24 15.08 0.00 0.00 33083.31 3202.33 22878.02 00:13:45.994 =================================================================================================================== 00:13:45.994 Total : 3860.24 15.08 0.00 0.00 33083.31 3202.33 22878.02 00:13:45.994 0 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73110 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73110 ']' 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73110 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73110 00:13:45.994 killing process with pid 73110 00:13:45.994 Received shutdown signal, test time was about 10.000000 seconds 00:13:45.994 00:13:45.994 Latency(us) 00:13:45.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.994 =================================================================================================================== 00:13:45.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73110' 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73110 00:13:45.994 [2024-07-15 16:28:31.462657] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:45.994 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73110 00:13:46.252 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.msJsERpKGe 00:13:46.252 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:46.252 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.msJsERpKGe 00:13:46.252 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:46.252 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.252 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.msJsERpKGe 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.msJsERpKGe' 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73259 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73259 /var/tmp/bdevperf.sock 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73259 ']' 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:46.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.253 16:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.253 [2024-07-15 16:28:31.740330] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
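run_bdevperf, used for the passing case above and for the wrong-key case being set up here, boils down to three steps (the waitforlisten handshake and the cleanup/kill at the end are omitted from this sketch). The attach step is where the PSK is supplied, so a mismatched key fails at connect time rather than during I/O.

# 1. Start bdevperf idle (-z) with its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# 2. Attach the NVMe-oF/TCP controller over that socket, passing the TLS PSK.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BGQrpV4wXT

# 3. Kick off the configured verify workload.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests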
00:13:46.253 [2024-07-15 16:28:31.740410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73259 ] 00:13:46.511 [2024-07-15 16:28:31.870295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.511 [2024-07-15 16:28:31.991066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.511 [2024-07-15 16:28:32.047706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:46.830 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.830 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:46.830 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.msJsERpKGe 00:13:47.090 [2024-07-15 16:28:32.411112] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:47.090 [2024-07-15 16:28:32.411249] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:47.090 [2024-07-15 16:28:32.419480] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:47.090 [2024-07-15 16:28:32.419847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c21f0 (107): Transport endpoint is not connected 00:13:47.090 [2024-07-15 16:28:32.420836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c21f0 (9): Bad file descriptor 00:13:47.090 [2024-07-15 16:28:32.421832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:47.090 [2024-07-15 16:28:32.421874] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:47.090 [2024-07-15 16:28:32.421889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
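The connection failure above is the expected outcome: /tmp/tmp.msJsERpKGe holds the second, non-matching key, so the target rejects the session and bdev_nvme_attach_controller returns the I/O error shown next. The NOT wrapper traced around run_bdevperf inverts that result so an expected failure counts as a pass; a simplified sketch of the idiom (the real autotest_common.sh version also special-cases exit codes above 128 from signals):

NOT() {
    local es=0
    "$@" || es=$?
    if ((es == 0)); then
        return 1        # command unexpectedly succeeded -> test failure
    fi
    return 0            # command failed as expected (here: the wrong PSK was rejected)
}

# run_bdevperf is the tls.sh helper traced above.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.msJsERpKGe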
00:13:47.090 request: 00:13:47.090 { 00:13:47.090 "name": "TLSTEST", 00:13:47.090 "trtype": "tcp", 00:13:47.090 "traddr": "10.0.0.2", 00:13:47.090 "adrfam": "ipv4", 00:13:47.090 "trsvcid": "4420", 00:13:47.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.090 "prchk_reftag": false, 00:13:47.090 "prchk_guard": false, 00:13:47.090 "hdgst": false, 00:13:47.090 "ddgst": false, 00:13:47.090 "psk": "/tmp/tmp.msJsERpKGe", 00:13:47.090 "method": "bdev_nvme_attach_controller", 00:13:47.090 "req_id": 1 00:13:47.090 } 00:13:47.090 Got JSON-RPC error response 00:13:47.090 response: 00:13:47.090 { 00:13:47.090 "code": -5, 00:13:47.090 "message": "Input/output error" 00:13:47.090 } 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73259 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73259 ']' 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73259 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73259 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:47.090 killing process with pid 73259 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73259' 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73259 00:13:47.090 Received shutdown signal, test time was about 10.000000 seconds 00:13:47.090 00:13:47.090 Latency(us) 00:13:47.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.090 =================================================================================================================== 00:13:47.090 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:47.090 [2024-07-15 16:28:32.473099] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:47.090 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73259 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BGQrpV4wXT 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BGQrpV4wXT 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BGQrpV4wXT 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BGQrpV4wXT' 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73279 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73279 /var/tmp/bdevperf.sock 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73279 ']' 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.349 16:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.349 [2024-07-15 16:28:32.774399] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:13:47.349 [2024-07-15 16:28:32.774530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73279 ] 00:13:47.608 [2024-07-15 16:28:32.918507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.608 [2024-07-15 16:28:33.042157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.608 [2024-07-15 16:28:33.098115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.593 16:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.593 16:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:48.593 16:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.BGQrpV4wXT 00:13:48.593 [2024-07-15 16:28:34.055147] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:48.593 [2024-07-15 16:28:34.055271] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:48.593 [2024-07-15 16:28:34.060099] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:48.593 [2024-07-15 16:28:34.060141] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:48.593 [2024-07-15 16:28:34.060195] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:48.593 [2024-07-15 16:28:34.060801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f201f0 (107): Transport endpoint is not connected 00:13:48.593 [2024-07-15 16:28:34.061787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f201f0 (9): Bad file descriptor 00:13:48.593 [2024-07-15 16:28:34.062783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:48.593 [2024-07-15 16:28:34.062805] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:48.593 [2024-07-15 16:28:34.062819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
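The failure above is the expected outcome of this negative case: the same PSK file is reused, but the host NQN is swapped from host1 to host2, so the target cannot find a key for the identity it derives from the host and subsystem NQNs ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" in the error) and the connection is torn down. Stripped of the xtrace noise, the initiator-side call is sketched below; the binary path, address, NQNs and key file are copied verbatim from the trace, and the JSON-RPC error that follows (-5, Input/output error) is the observed, expected failure.

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
    --psk /tmp/tmp.BGQrpV4wXT   # hostnqn does not match the host registered on the target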
00:13:48.593 request: 00:13:48.593 { 00:13:48.593 "name": "TLSTEST", 00:13:48.593 "trtype": "tcp", 00:13:48.593 "traddr": "10.0.0.2", 00:13:48.594 "adrfam": "ipv4", 00:13:48.594 "trsvcid": "4420", 00:13:48.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.594 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:48.594 "prchk_reftag": false, 00:13:48.594 "prchk_guard": false, 00:13:48.594 "hdgst": false, 00:13:48.594 "ddgst": false, 00:13:48.594 "psk": "/tmp/tmp.BGQrpV4wXT", 00:13:48.594 "method": "bdev_nvme_attach_controller", 00:13:48.594 "req_id": 1 00:13:48.594 } 00:13:48.594 Got JSON-RPC error response 00:13:48.594 response: 00:13:48.594 { 00:13:48.594 "code": -5, 00:13:48.594 "message": "Input/output error" 00:13:48.594 } 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73279 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73279 ']' 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73279 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73279 00:13:48.594 killing process with pid 73279 00:13:48.594 Received shutdown signal, test time was about 10.000000 seconds 00:13:48.594 00:13:48.594 Latency(us) 00:13:48.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.594 =================================================================================================================== 00:13:48.594 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73279' 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73279 00:13:48.594 [2024-07-15 16:28:34.108457] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:48.594 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73279 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BGQrpV4wXT 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BGQrpV4wXT 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:48.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BGQrpV4wXT 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BGQrpV4wXT' 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73307 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73307 /var/tmp/bdevperf.sock 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73307 ']' 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.866 16:28:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.866 [2024-07-15 16:28:34.376157] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:13:48.866 [2024-07-15 16:28:34.376236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73307 ] 00:13:49.138 [2024-07-15 16:28:34.510011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.138 [2024-07-15 16:28:34.632108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.406 [2024-07-15 16:28:34.688441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:49.990 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.990 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:49.990 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BGQrpV4wXT 00:13:50.249 [2024-07-15 16:28:35.605061] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:50.249 [2024-07-15 16:28:35.605232] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:50.249 [2024-07-15 16:28:35.610281] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:50.249 [2024-07-15 16:28:35.610333] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:50.249 [2024-07-15 16:28:35.610405] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:50.249 [2024-07-15 16:28:35.610899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16311f0 (107): Transport endpoint is not connected 00:13:50.249 [2024-07-15 16:28:35.611882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16311f0 (9): Bad file descriptor 00:13:50.249 [2024-07-15 16:28:35.612879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:50.249 [2024-07-15 16:28:35.612910] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:50.249 [2024-07-15 16:28:35.612926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:50.249 request: 00:13:50.249 { 00:13:50.249 "name": "TLSTEST", 00:13:50.249 "trtype": "tcp", 00:13:50.249 "traddr": "10.0.0.2", 00:13:50.249 "adrfam": "ipv4", 00:13:50.249 "trsvcid": "4420", 00:13:50.249 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:50.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:50.249 "prchk_reftag": false, 00:13:50.249 "prchk_guard": false, 00:13:50.249 "hdgst": false, 00:13:50.249 "ddgst": false, 00:13:50.249 "psk": "/tmp/tmp.BGQrpV4wXT", 00:13:50.249 "method": "bdev_nvme_attach_controller", 00:13:50.249 "req_id": 1 00:13:50.249 } 00:13:50.249 Got JSON-RPC error response 00:13:50.249 response: 00:13:50.249 { 00:13:50.249 "code": -5, 00:13:50.249 "message": "Input/output error" 00:13:50.249 } 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73307 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73307 ']' 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73307 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73307 00:13:50.249 killing process with pid 73307 00:13:50.249 Received shutdown signal, test time was about 10.000000 seconds 00:13:50.249 00:13:50.249 Latency(us) 00:13:50.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.249 =================================================================================================================== 00:13:50.249 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73307' 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73307 00:13:50.249 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73307 00:13:50.249 [2024-07-15 16:28:35.661504] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:13:50.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73334 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73334 /var/tmp/bdevperf.sock 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73334 ']' 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.508 16:28:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.508 [2024-07-15 16:28:36.017795] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:13:50.508 [2024-07-15 16:28:36.017901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73334 ] 00:13:50.766 [2024-07-15 16:28:36.149735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.766 [2024-07-15 16:28:36.295479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.025 [2024-07-15 16:28:36.369633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.591 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.591 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:51.591 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:51.850 [2024-07-15 16:28:37.240501] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:51.850 [2024-07-15 16:28:37.242413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2207c00 (9): Bad file descriptor 00:13:51.850 [2024-07-15 16:28:37.243404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:51.850 [2024-07-15 16:28:37.243426] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:51.850 [2024-07-15 16:28:37.243441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:51.850 request: 00:13:51.850 { 00:13:51.850 "name": "TLSTEST", 00:13:51.850 "trtype": "tcp", 00:13:51.850 "traddr": "10.0.0.2", 00:13:51.850 "adrfam": "ipv4", 00:13:51.850 "trsvcid": "4420", 00:13:51.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:51.850 "prchk_reftag": false, 00:13:51.850 "prchk_guard": false, 00:13:51.850 "hdgst": false, 00:13:51.850 "ddgst": false, 00:13:51.850 "method": "bdev_nvme_attach_controller", 00:13:51.850 "req_id": 1 00:13:51.850 } 00:13:51.850 Got JSON-RPC error response 00:13:51.850 response: 00:13:51.850 { 00:13:51.850 "code": -5, 00:13:51.850 "message": "Input/output error" 00:13:51.850 } 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73334 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73334 ']' 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73334 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73334 00:13:51.850 killing process with pid 73334 00:13:51.850 Received shutdown signal, test time was about 10.000000 seconds 00:13:51.850 00:13:51.850 Latency(us) 00:13:51.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.850 =================================================================================================================== 00:13:51.850 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73334' 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73334 00:13:51.850 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73334 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72874 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72874 ']' 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72874 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72874 00:13:52.109 killing process with pid 72874 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72874' 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72874 00:13:52.109 [2024-07-15 16:28:37.631340] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:52.109 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72874 00:13:52.367 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:52.367 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:52.367 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:52.367 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:52.367 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:52.367 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:52.367 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Z7ZciVkTZZ 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Z7ZciVkTZZ 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73376 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73376 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73376 ']' 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.626 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.627 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.627 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.627 16:28:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.627 [2024-07-15 16:28:37.995512] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
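For reference, the NVMeTLSkey-1:02:...: string produced above by format_interchange_psk is the configured key with a CRC-32 appended, base64-encoded, and wrapped in the key-format prefix and the digest field ("02" here, taken from the trace). A minimal sketch is below, mirroring the python one-liner the helper runs; the little-endian byte order of the appended CRC-32 and the treatment of the hex string as ASCII bytes (which the base64 payload suggests) are assumptions, not something the trace states explicitly.

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # hex string treated as ASCII bytes (assumed)
crc = zlib.crc32(key).to_bytes(4, "little")   # appended CRC-32, byte order assumed
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
PYEOF

If those assumptions hold, this prints the same NVMeTLSkey-1:02:MDAx...wWXNJw==: value shown in the trace before it is written to /tmp/tmp.Z7ZciVkTZZ and chmod'ed to 0600.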
00:13:52.627 [2024-07-15 16:28:37.995616] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.627 [2024-07-15 16:28:38.131569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.886 [2024-07-15 16:28:38.292013] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.886 [2024-07-15 16:28:38.292366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.886 [2024-07-15 16:28:38.292551] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.886 [2024-07-15 16:28:38.292730] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.886 [2024-07-15 16:28:38.292937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.886 [2024-07-15 16:28:38.293013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.886 [2024-07-15 16:28:38.349537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Z7ZciVkTZZ 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Z7ZciVkTZZ 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:53.821 [2024-07-15 16:28:39.276219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.821 16:28:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:54.081 16:28:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:54.339 [2024-07-15 16:28:39.812420] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:54.339 [2024-07-15 16:28:39.812733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.339 16:28:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:54.598 malloc0 00:13:54.598 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:54.889 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ 00:13:55.195 
[2024-07-15 16:28:40.503196] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z7ZciVkTZZ 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:55.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Z7ZciVkTZZ' 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73432 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73432 /var/tmp/bdevperf.sock 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73432 ']' 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.195 16:28:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.195 [2024-07-15 16:28:40.563274] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
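The segment that follows is the positive TLS case: bdevperf is started in RPC-server mode (-z) and driven entirely over /var/tmp/bdevperf.sock. Condensed from the xtrace lines, with binary paths, options, NQNs and the key file copied verbatim (the harness additionally backgrounds bdevperf and waits for the RPC socket before issuing calls), the flow is:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# attach a TLS-protected NVMe/TCP controller using the 0600 key file
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ

# kick off the timed verify workload over the same RPC socket
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The run below completes on TLSTESTn1 at roughly 3978 IOPS over the 10-second window.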
00:13:55.195 [2024-07-15 16:28:40.563642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73432 ] 00:13:55.195 [2024-07-15 16:28:40.697325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.453 [2024-07-15 16:28:40.812644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.453 [2024-07-15 16:28:40.866648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.020 16:28:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.020 16:28:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:56.020 16:28:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ 00:13:56.280 [2024-07-15 16:28:41.757511] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:56.280 [2024-07-15 16:28:41.757929] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:56.539 TLSTESTn1 00:13:56.539 16:28:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:56.539 Running I/O for 10 seconds... 00:14:06.526 00:14:06.526 Latency(us) 00:14:06.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.526 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:06.526 Verification LBA range: start 0x0 length 0x2000 00:14:06.526 TLSTESTn1 : 10.02 3978.08 15.54 0.00 0.00 32113.90 6881.28 32887.16 00:14:06.526 =================================================================================================================== 00:14:06.526 Total : 3978.08 15.54 0.00 0.00 32113.90 6881.28 32887.16 00:14:06.526 0 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73432 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73432 ']' 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73432 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73432 00:14:06.526 killing process with pid 73432 00:14:06.526 Received shutdown signal, test time was about 10.000000 seconds 00:14:06.526 00:14:06.526 Latency(us) 00:14:06.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.526 =================================================================================================================== 00:14:06.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73432' 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73432 00:14:06.526 [2024-07-15 16:28:52.037761] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:06.526 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73432 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Z7ZciVkTZZ 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z7ZciVkTZZ 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z7ZciVkTZZ 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z7ZciVkTZZ 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:06.784 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Z7ZciVkTZZ' 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:06.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73569 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73569 /var/tmp/bdevperf.sock 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73569 ']' 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.785 16:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.043 [2024-07-15 16:28:52.342655] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
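target/tls.sh@170 above deliberately relaxes the key file mode to 0666 before retrying the attach, so the failure in the next segment ("Incorrect permissions for PSK file", "Could not load PSK from /tmp/tmp.Z7ZciVkTZZ", JSON-RPC -1 Operation not permitted) is the expected result. As far as the trace shows, the rule being exercised is simply:

chmod 0600 /tmp/tmp.Z7ZciVkTZZ   # accepted: owner-only access (target/tls.sh@162 and @181)
chmod 0666 /tmp/tmp.Z7ZciVkTZZ   # rejected when loading the PSK for bdev_nvme_attach_controller

The same check is hit again on the target side further down, where nvmf_subsystem_add_host fails with -32603 until the mode is restored to 0600.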
00:14:07.043 [2024-07-15 16:28:52.343214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73569 ] 00:14:07.043 [2024-07-15 16:28:52.476324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.043 [2024-07-15 16:28:52.592599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.301 [2024-07-15 16:28:52.647166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:07.867 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.867 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:07.867 16:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ 00:14:08.126 [2024-07-15 16:28:53.586759] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.126 [2024-07-15 16:28:53.586838] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:08.126 [2024-07-15 16:28:53.586850] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Z7ZciVkTZZ 00:14:08.126 request: 00:14:08.126 { 00:14:08.126 "name": "TLSTEST", 00:14:08.126 "trtype": "tcp", 00:14:08.126 "traddr": "10.0.0.2", 00:14:08.126 "adrfam": "ipv4", 00:14:08.126 "trsvcid": "4420", 00:14:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.126 "prchk_reftag": false, 00:14:08.126 "prchk_guard": false, 00:14:08.126 "hdgst": false, 00:14:08.126 "ddgst": false, 00:14:08.126 "psk": "/tmp/tmp.Z7ZciVkTZZ", 00:14:08.126 "method": "bdev_nvme_attach_controller", 00:14:08.126 "req_id": 1 00:14:08.126 } 00:14:08.126 Got JSON-RPC error response 00:14:08.126 response: 00:14:08.126 { 00:14:08.126 "code": -1, 00:14:08.126 "message": "Operation not permitted" 00:14:08.126 } 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73569 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73569 ']' 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73569 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73569 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:08.126 killing process with pid 73569 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73569' 00:14:08.126 Received shutdown signal, test time was about 10.000000 seconds 00:14:08.126 00:14:08.126 Latency(us) 00:14:08.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.126 =================================================================================================================== 00:14:08.126 Total : 0.00 0.00 0.00 0.00 
0.00 18446744073709551616.00 0.00 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73569 00:14:08.126 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73569 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73376 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73376 ']' 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73376 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73376 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:08.384 killing process with pid 73376 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73376' 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73376 00:14:08.384 [2024-07-15 16:28:53.883532] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:08.384 16:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73376 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73608 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73608 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73608 ']' 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.643 16:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.643 [2024-07-15 16:28:54.188851] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:14:08.643 [2024-07-15 16:28:54.188999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.902 [2024-07-15 16:28:54.322283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.902 [2024-07-15 16:28:54.438001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.902 [2024-07-15 16:28:54.438052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.902 [2024-07-15 16:28:54.438079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.902 [2024-07-15 16:28:54.438087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.902 [2024-07-15 16:28:54.438094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.902 [2024-07-15 16:28:54.438124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.161 [2024-07-15 16:28:54.493323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Z7ZciVkTZZ 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Z7ZciVkTZZ 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Z7ZciVkTZZ 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Z7ZciVkTZZ 00:14:09.728 16:28:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:10.295 [2024-07-15 16:28:55.542911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.295 16:28:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:10.295 16:28:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:10.554 [2024-07-15 16:28:56.067025] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:14:10.554 [2024-07-15 16:28:56.067296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.554 16:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:11.123 malloc0 00:14:11.123 16:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:11.123 16:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ 00:14:11.382 [2024-07-15 16:28:56.907082] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:11.382 [2024-07-15 16:28:56.907132] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:11.382 [2024-07-15 16:28:56.907199] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:11.382 request: 00:14:11.382 { 00:14:11.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.382 "host": "nqn.2016-06.io.spdk:host1", 00:14:11.382 "psk": "/tmp/tmp.Z7ZciVkTZZ", 00:14:11.382 "method": "nvmf_subsystem_add_host", 00:14:11.382 "req_id": 1 00:14:11.382 } 00:14:11.382 Got JSON-RPC error response 00:14:11.382 response: 00:14:11.382 { 00:14:11.382 "code": -32603, 00:14:11.382 "message": "Internal error" 00:14:11.382 } 00:14:11.382 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:11.382 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:11.382 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:11.382 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:11.382 16:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73608 00:14:11.382 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73608 ']' 00:14:11.382 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73608 00:14:11.382 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:11.382 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.641 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73608 00:14:11.641 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:11.641 killing process with pid 73608 00:14:11.641 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:11.641 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73608' 00:14:11.641 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73608 00:14:11.641 16:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73608 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Z7ZciVkTZZ 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73676 00:14:11.900 
16:28:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73676 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73676 ']' 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.900 16:28:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.900 [2024-07-15 16:28:57.256092] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:11.900 [2024-07-15 16:28:57.256178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.900 [2024-07-15 16:28:57.386653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.159 [2024-07-15 16:28:57.495738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.159 [2024-07-15 16:28:57.495806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.159 [2024-07-15 16:28:57.495832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.159 [2024-07-15 16:28:57.495840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.159 [2024-07-15 16:28:57.495847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
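The app_setup_trace notices that close the block above describe two ways to look at the tracepoints enabled by -e 0xFFFF. A minimal sketch of both, taken from the wording of the notices themselves; the spdk_trace binary location and the /tmp output paths are assumptions, not part of the run:

  # live snapshot of nvmf tracepoints from the running target (shm id 0, matching -i 0)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # or keep the raw shared-memory trace file for offline analysis, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0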
00:14:12.159 [2024-07-15 16:28:57.495886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.159 [2024-07-15 16:28:57.551627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.727 16:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.727 16:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:12.727 16:28:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:12.727 16:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:12.727 16:28:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.727 16:28:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.727 16:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Z7ZciVkTZZ 00:14:12.727 16:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Z7ZciVkTZZ 00:14:12.727 16:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:12.986 [2024-07-15 16:28:58.462004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.986 16:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:13.244 16:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:13.504 [2024-07-15 16:28:58.930145] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:13.504 [2024-07-15 16:28:58.930387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.504 16:28:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:13.763 malloc0 00:14:13.763 16:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:14.022 16:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ 00:14:14.281 [2024-07-15 16:28:59.685622] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:14.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
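The successful setup_nvmf_tgt pass above is just the sequence of RPCs visible in the trace, and the final nvmf_subsystem_add_host only succeeds because the PSK file was tightened to 0600 after the earlier "Incorrect permissions for PSK file" failure. A condensed sketch of that target-side sequence, using the same paths and NQNs shown in the run (the key contents are whatever the test generated in /tmp/tmp.Z7ZciVkTZZ; -k on the listener is what enables the experimental TLS support):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  chmod 0600 /tmp/tmp.Z7ZciVkTZZ     # looser modes are rejected, as seen in the first add_host attempt
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ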
00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73725 00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73725 /var/tmp/bdevperf.sock 00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73725 ']' 00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.281 16:28:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.281 [2024-07-15 16:28:59.755914] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:14.281 [2024-07-15 16:28:59.756040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73725 ] 00:14:14.540 [2024-07-15 16:28:59.897346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.540 [2024-07-15 16:29:00.046797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.850 [2024-07-15 16:29:00.105709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:15.441 16:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.441 16:29:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:15.441 16:29:00 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ 00:14:15.699 [2024-07-15 16:29:01.076756] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.699 [2024-07-15 16:29:01.076961] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:15.699 TLSTESTn1 00:14:15.699 16:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:15.957 16:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:15.957 "subsystems": [ 00:14:15.957 { 00:14:15.957 "subsystem": "keyring", 00:14:15.957 "config": [] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "iobuf", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 "method": "iobuf_set_options", 00:14:15.957 "params": { 00:14:15.957 "small_pool_count": 8192, 00:14:15.957 "large_pool_count": 1024, 00:14:15.957 "small_bufsize": 8192, 00:14:15.957 "large_bufsize": 135168 00:14:15.957 } 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "sock", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 
"method": "sock_set_default_impl", 00:14:15.957 "params": { 00:14:15.957 "impl_name": "uring" 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "sock_impl_set_options", 00:14:15.957 "params": { 00:14:15.957 "impl_name": "ssl", 00:14:15.957 "recv_buf_size": 4096, 00:14:15.957 "send_buf_size": 4096, 00:14:15.957 "enable_recv_pipe": true, 00:14:15.957 "enable_quickack": false, 00:14:15.957 "enable_placement_id": 0, 00:14:15.957 "enable_zerocopy_send_server": true, 00:14:15.957 "enable_zerocopy_send_client": false, 00:14:15.957 "zerocopy_threshold": 0, 00:14:15.957 "tls_version": 0, 00:14:15.957 "enable_ktls": false 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "sock_impl_set_options", 00:14:15.957 "params": { 00:14:15.957 "impl_name": "posix", 00:14:15.957 "recv_buf_size": 2097152, 00:14:15.957 "send_buf_size": 2097152, 00:14:15.957 "enable_recv_pipe": true, 00:14:15.957 "enable_quickack": false, 00:14:15.957 "enable_placement_id": 0, 00:14:15.957 "enable_zerocopy_send_server": true, 00:14:15.957 "enable_zerocopy_send_client": false, 00:14:15.957 "zerocopy_threshold": 0, 00:14:15.957 "tls_version": 0, 00:14:15.957 "enable_ktls": false 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "sock_impl_set_options", 00:14:15.957 "params": { 00:14:15.957 "impl_name": "uring", 00:14:15.957 "recv_buf_size": 2097152, 00:14:15.957 "send_buf_size": 2097152, 00:14:15.957 "enable_recv_pipe": true, 00:14:15.957 "enable_quickack": false, 00:14:15.957 "enable_placement_id": 0, 00:14:15.957 "enable_zerocopy_send_server": false, 00:14:15.957 "enable_zerocopy_send_client": false, 00:14:15.957 "zerocopy_threshold": 0, 00:14:15.957 "tls_version": 0, 00:14:15.957 "enable_ktls": false 00:14:15.957 } 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "vmd", 00:14:15.957 "config": [] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "accel", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 "method": "accel_set_options", 00:14:15.957 "params": { 00:14:15.957 "small_cache_size": 128, 00:14:15.957 "large_cache_size": 16, 00:14:15.957 "task_count": 2048, 00:14:15.957 "sequence_count": 2048, 00:14:15.957 "buf_count": 2048 00:14:15.957 } 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "bdev", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 "method": "bdev_set_options", 00:14:15.957 "params": { 00:14:15.957 "bdev_io_pool_size": 65535, 00:14:15.957 "bdev_io_cache_size": 256, 00:14:15.957 "bdev_auto_examine": true, 00:14:15.957 "iobuf_small_cache_size": 128, 00:14:15.957 "iobuf_large_cache_size": 16 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_raid_set_options", 00:14:15.957 "params": { 00:14:15.957 "process_window_size_kb": 1024 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_iscsi_set_options", 00:14:15.957 "params": { 00:14:15.957 "timeout_sec": 30 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_nvme_set_options", 00:14:15.957 "params": { 00:14:15.957 "action_on_timeout": "none", 00:14:15.957 "timeout_us": 0, 00:14:15.957 "timeout_admin_us": 0, 00:14:15.957 "keep_alive_timeout_ms": 10000, 00:14:15.957 "arbitration_burst": 0, 00:14:15.957 "low_priority_weight": 0, 00:14:15.957 "medium_priority_weight": 0, 00:14:15.957 "high_priority_weight": 0, 00:14:15.957 "nvme_adminq_poll_period_us": 10000, 00:14:15.957 "nvme_ioq_poll_period_us": 0, 00:14:15.957 "io_queue_requests": 0, 00:14:15.957 
"delay_cmd_submit": true, 00:14:15.957 "transport_retry_count": 4, 00:14:15.957 "bdev_retry_count": 3, 00:14:15.957 "transport_ack_timeout": 0, 00:14:15.957 "ctrlr_loss_timeout_sec": 0, 00:14:15.957 "reconnect_delay_sec": 0, 00:14:15.957 "fast_io_fail_timeout_sec": 0, 00:14:15.957 "disable_auto_failback": false, 00:14:15.957 "generate_uuids": false, 00:14:15.957 "transport_tos": 0, 00:14:15.957 "nvme_error_stat": false, 00:14:15.957 "rdma_srq_size": 0, 00:14:15.957 "io_path_stat": false, 00:14:15.957 "allow_accel_sequence": false, 00:14:15.957 "rdma_max_cq_size": 0, 00:14:15.957 "rdma_cm_event_timeout_ms": 0, 00:14:15.957 "dhchap_digests": [ 00:14:15.957 "sha256", 00:14:15.957 "sha384", 00:14:15.957 "sha512" 00:14:15.957 ], 00:14:15.957 "dhchap_dhgroups": [ 00:14:15.957 "null", 00:14:15.957 "ffdhe2048", 00:14:15.957 "ffdhe3072", 00:14:15.957 "ffdhe4096", 00:14:15.957 "ffdhe6144", 00:14:15.957 "ffdhe8192" 00:14:15.957 ] 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_nvme_set_hotplug", 00:14:15.957 "params": { 00:14:15.957 "period_us": 100000, 00:14:15.957 "enable": false 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_malloc_create", 00:14:15.957 "params": { 00:14:15.957 "name": "malloc0", 00:14:15.957 "num_blocks": 8192, 00:14:15.957 "block_size": 4096, 00:14:15.957 "physical_block_size": 4096, 00:14:15.957 "uuid": "e902b90a-bd43-4eeb-95a0-ed41d3d99740", 00:14:15.957 "optimal_io_boundary": 0 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_wait_for_examine" 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "nbd", 00:14:15.957 "config": [] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "scheduler", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 "method": "framework_set_scheduler", 00:14:15.957 "params": { 00:14:15.957 "name": "static" 00:14:15.957 } 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "nvmf", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 "method": "nvmf_set_config", 00:14:15.957 "params": { 00:14:15.957 "discovery_filter": "match_any", 00:14:15.957 "admin_cmd_passthru": { 00:14:15.957 "identify_ctrlr": false 00:14:15.957 } 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "nvmf_set_max_subsystems", 00:14:15.957 "params": { 00:14:15.957 "max_subsystems": 1024 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "nvmf_set_crdt", 00:14:15.957 "params": { 00:14:15.957 "crdt1": 0, 00:14:15.957 "crdt2": 0, 00:14:15.957 "crdt3": 0 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "nvmf_create_transport", 00:14:15.957 "params": { 00:14:15.957 "trtype": "TCP", 00:14:15.957 "max_queue_depth": 128, 00:14:15.957 "max_io_qpairs_per_ctrlr": 127, 00:14:15.957 "in_capsule_data_size": 4096, 00:14:15.957 "max_io_size": 131072, 00:14:15.957 "io_unit_size": 131072, 00:14:15.957 "max_aq_depth": 128, 00:14:15.957 "num_shared_buffers": 511, 00:14:15.957 "buf_cache_size": 4294967295, 00:14:15.957 "dif_insert_or_strip": false, 00:14:15.957 "zcopy": false, 00:14:15.957 "c2h_success": false, 00:14:15.957 "sock_priority": 0, 00:14:15.957 "abort_timeout_sec": 1, 00:14:15.957 "ack_timeout": 0, 00:14:15.957 "data_wr_pool_size": 0 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "nvmf_create_subsystem", 00:14:15.957 "params": { 00:14:15.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.957 "allow_any_host": false, 00:14:15.957 "serial_number": 
"SPDK00000000000001", 00:14:15.957 "model_number": "SPDK bdev Controller", 00:14:15.957 "max_namespaces": 10, 00:14:15.957 "min_cntlid": 1, 00:14:15.957 "max_cntlid": 65519, 00:14:15.957 "ana_reporting": false 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.958 "method": "nvmf_subsystem_add_host", 00:14:15.958 "params": { 00:14:15.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.958 "host": "nqn.2016-06.io.spdk:host1", 00:14:15.958 "psk": "/tmp/tmp.Z7ZciVkTZZ" 00:14:15.958 } 00:14:15.958 }, 00:14:15.958 { 00:14:15.958 "method": "nvmf_subsystem_add_ns", 00:14:15.958 "params": { 00:14:15.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.958 "namespace": { 00:14:15.958 "nsid": 1, 00:14:15.958 "bdev_name": "malloc0", 00:14:15.958 "nguid": "E902B90ABD434EEB95A0ED41D3D99740", 00:14:15.958 "uuid": "e902b90a-bd43-4eeb-95a0-ed41d3d99740", 00:14:15.958 "no_auto_visible": false 00:14:15.958 } 00:14:15.958 } 00:14:15.958 }, 00:14:15.958 { 00:14:15.958 "method": "nvmf_subsystem_add_listener", 00:14:15.958 "params": { 00:14:15.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.958 "listen_address": { 00:14:15.958 "trtype": "TCP", 00:14:15.958 "adrfam": "IPv4", 00:14:15.958 "traddr": "10.0.0.2", 00:14:15.958 "trsvcid": "4420" 00:14:15.958 }, 00:14:15.958 "secure_channel": true 00:14:15.958 } 00:14:15.958 } 00:14:15.958 ] 00:14:15.958 } 00:14:15.958 ] 00:14:15.958 }' 00:14:15.958 16:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:16.216 16:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:16.216 "subsystems": [ 00:14:16.216 { 00:14:16.216 "subsystem": "keyring", 00:14:16.216 "config": [] 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "subsystem": "iobuf", 00:14:16.216 "config": [ 00:14:16.216 { 00:14:16.216 "method": "iobuf_set_options", 00:14:16.216 "params": { 00:14:16.216 "small_pool_count": 8192, 00:14:16.216 "large_pool_count": 1024, 00:14:16.216 "small_bufsize": 8192, 00:14:16.216 "large_bufsize": 135168 00:14:16.216 } 00:14:16.216 } 00:14:16.216 ] 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "subsystem": "sock", 00:14:16.216 "config": [ 00:14:16.216 { 00:14:16.216 "method": "sock_set_default_impl", 00:14:16.216 "params": { 00:14:16.216 "impl_name": "uring" 00:14:16.216 } 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "method": "sock_impl_set_options", 00:14:16.216 "params": { 00:14:16.216 "impl_name": "ssl", 00:14:16.216 "recv_buf_size": 4096, 00:14:16.216 "send_buf_size": 4096, 00:14:16.216 "enable_recv_pipe": true, 00:14:16.216 "enable_quickack": false, 00:14:16.216 "enable_placement_id": 0, 00:14:16.216 "enable_zerocopy_send_server": true, 00:14:16.216 "enable_zerocopy_send_client": false, 00:14:16.216 "zerocopy_threshold": 0, 00:14:16.216 "tls_version": 0, 00:14:16.216 "enable_ktls": false 00:14:16.216 } 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "method": "sock_impl_set_options", 00:14:16.216 "params": { 00:14:16.216 "impl_name": "posix", 00:14:16.216 "recv_buf_size": 2097152, 00:14:16.216 "send_buf_size": 2097152, 00:14:16.216 "enable_recv_pipe": true, 00:14:16.216 "enable_quickack": false, 00:14:16.216 "enable_placement_id": 0, 00:14:16.216 "enable_zerocopy_send_server": true, 00:14:16.216 "enable_zerocopy_send_client": false, 00:14:16.216 "zerocopy_threshold": 0, 00:14:16.216 "tls_version": 0, 00:14:16.216 "enable_ktls": false 00:14:16.216 } 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "method": "sock_impl_set_options", 00:14:16.216 "params": { 00:14:16.216 "impl_name": "uring", 
00:14:16.216 "recv_buf_size": 2097152, 00:14:16.216 "send_buf_size": 2097152, 00:14:16.216 "enable_recv_pipe": true, 00:14:16.216 "enable_quickack": false, 00:14:16.216 "enable_placement_id": 0, 00:14:16.216 "enable_zerocopy_send_server": false, 00:14:16.216 "enable_zerocopy_send_client": false, 00:14:16.216 "zerocopy_threshold": 0, 00:14:16.216 "tls_version": 0, 00:14:16.216 "enable_ktls": false 00:14:16.216 } 00:14:16.216 } 00:14:16.216 ] 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "subsystem": "vmd", 00:14:16.216 "config": [] 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "subsystem": "accel", 00:14:16.216 "config": [ 00:14:16.216 { 00:14:16.216 "method": "accel_set_options", 00:14:16.216 "params": { 00:14:16.216 "small_cache_size": 128, 00:14:16.216 "large_cache_size": 16, 00:14:16.216 "task_count": 2048, 00:14:16.216 "sequence_count": 2048, 00:14:16.216 "buf_count": 2048 00:14:16.216 } 00:14:16.216 } 00:14:16.216 ] 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "subsystem": "bdev", 00:14:16.216 "config": [ 00:14:16.216 { 00:14:16.216 "method": "bdev_set_options", 00:14:16.216 "params": { 00:14:16.216 "bdev_io_pool_size": 65535, 00:14:16.216 "bdev_io_cache_size": 256, 00:14:16.216 "bdev_auto_examine": true, 00:14:16.216 "iobuf_small_cache_size": 128, 00:14:16.216 "iobuf_large_cache_size": 16 00:14:16.216 } 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "method": "bdev_raid_set_options", 00:14:16.216 "params": { 00:14:16.216 "process_window_size_kb": 1024 00:14:16.216 } 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "method": "bdev_iscsi_set_options", 00:14:16.216 "params": { 00:14:16.216 "timeout_sec": 30 00:14:16.216 } 00:14:16.216 }, 00:14:16.216 { 00:14:16.216 "method": "bdev_nvme_set_options", 00:14:16.216 "params": { 00:14:16.216 "action_on_timeout": "none", 00:14:16.216 "timeout_us": 0, 00:14:16.217 "timeout_admin_us": 0, 00:14:16.217 "keep_alive_timeout_ms": 10000, 00:14:16.217 "arbitration_burst": 0, 00:14:16.217 "low_priority_weight": 0, 00:14:16.217 "medium_priority_weight": 0, 00:14:16.217 "high_priority_weight": 0, 00:14:16.217 "nvme_adminq_poll_period_us": 10000, 00:14:16.217 "nvme_ioq_poll_period_us": 0, 00:14:16.217 "io_queue_requests": 512, 00:14:16.217 "delay_cmd_submit": true, 00:14:16.217 "transport_retry_count": 4, 00:14:16.217 "bdev_retry_count": 3, 00:14:16.217 "transport_ack_timeout": 0, 00:14:16.217 "ctrlr_loss_timeout_sec": 0, 00:14:16.217 "reconnect_delay_sec": 0, 00:14:16.217 "fast_io_fail_timeout_sec": 0, 00:14:16.217 "disable_auto_failback": false, 00:14:16.217 "generate_uuids": false, 00:14:16.217 "transport_tos": 0, 00:14:16.217 "nvme_error_stat": false, 00:14:16.217 "rdma_srq_size": 0, 00:14:16.217 "io_path_stat": false, 00:14:16.217 "allow_accel_sequence": false, 00:14:16.217 "rdma_max_cq_size": 0, 00:14:16.217 "rdma_cm_event_timeout_ms": 0, 00:14:16.217 "dhchap_digests": [ 00:14:16.217 "sha256", 00:14:16.217 "sha384", 00:14:16.217 "sha512" 00:14:16.217 ], 00:14:16.217 "dhchap_dhgroups": [ 00:14:16.217 "null", 00:14:16.217 "ffdhe2048", 00:14:16.217 "ffdhe3072", 00:14:16.217 "ffdhe4096", 00:14:16.217 "ffdhe6144", 00:14:16.217 "ffdhe8192" 00:14:16.217 ] 00:14:16.217 } 00:14:16.217 }, 00:14:16.217 { 00:14:16.217 "method": "bdev_nvme_attach_controller", 00:14:16.217 "params": { 00:14:16.217 "name": "TLSTEST", 00:14:16.217 "trtype": "TCP", 00:14:16.217 "adrfam": "IPv4", 00:14:16.217 "traddr": "10.0.0.2", 00:14:16.217 "trsvcid": "4420", 00:14:16.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.217 "prchk_reftag": false, 00:14:16.217 "prchk_guard": false, 00:14:16.217 
"ctrlr_loss_timeout_sec": 0, 00:14:16.217 "reconnect_delay_sec": 0, 00:14:16.217 "fast_io_fail_timeout_sec": 0, 00:14:16.217 "psk": "/tmp/tmp.Z7ZciVkTZZ", 00:14:16.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:16.217 "hdgst": false, 00:14:16.217 "ddgst": false 00:14:16.217 } 00:14:16.217 }, 00:14:16.217 { 00:14:16.217 "method": "bdev_nvme_set_hotplug", 00:14:16.217 "params": { 00:14:16.217 "period_us": 100000, 00:14:16.217 "enable": false 00:14:16.217 } 00:14:16.217 }, 00:14:16.217 { 00:14:16.217 "method": "bdev_wait_for_examine" 00:14:16.217 } 00:14:16.217 ] 00:14:16.217 }, 00:14:16.217 { 00:14:16.217 "subsystem": "nbd", 00:14:16.217 "config": [] 00:14:16.217 } 00:14:16.217 ] 00:14:16.217 }' 00:14:16.217 16:29:01 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73725 00:14:16.217 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73725 ']' 00:14:16.217 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73725 00:14:16.217 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:16.475 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:16.475 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73725 00:14:16.475 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:16.475 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:16.475 killing process with pid 73725 00:14:16.475 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73725' 00:14:16.475 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73725 00:14:16.475 Received shutdown signal, test time was about 10.000000 seconds 00:14:16.475 00:14:16.475 Latency(us) 00:14:16.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.475 =================================================================================================================== 00:14:16.475 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:16.475 [2024-07-15 16:29:01.790094] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:16.475 16:29:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73725 00:14:16.475 16:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73676 00:14:16.475 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73676 ']' 00:14:16.475 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73676 00:14:16.733 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:16.733 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:16.733 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73676 00:14:16.733 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:16.733 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:16.733 killing process with pid 73676 00:14:16.733 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73676' 00:14:16.733 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73676 00:14:16.733 [2024-07-15 16:29:02.048697] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:14:16.733 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73676 00:14:16.992 16:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:16.992 16:29:02 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:16.992 "subsystems": [ 00:14:16.992 { 00:14:16.992 "subsystem": "keyring", 00:14:16.992 "config": [] 00:14:16.992 }, 00:14:16.992 { 00:14:16.992 "subsystem": "iobuf", 00:14:16.992 "config": [ 00:14:16.992 { 00:14:16.992 "method": "iobuf_set_options", 00:14:16.992 "params": { 00:14:16.992 "small_pool_count": 8192, 00:14:16.992 "large_pool_count": 1024, 00:14:16.992 "small_bufsize": 8192, 00:14:16.992 "large_bufsize": 135168 00:14:16.992 } 00:14:16.992 } 00:14:16.992 ] 00:14:16.992 }, 00:14:16.992 { 00:14:16.992 "subsystem": "sock", 00:14:16.992 "config": [ 00:14:16.992 { 00:14:16.992 "method": "sock_set_default_impl", 00:14:16.992 "params": { 00:14:16.992 "impl_name": "uring" 00:14:16.992 } 00:14:16.992 }, 00:14:16.992 { 00:14:16.992 "method": "sock_impl_set_options", 00:14:16.992 "params": { 00:14:16.992 "impl_name": "ssl", 00:14:16.992 "recv_buf_size": 4096, 00:14:16.992 "send_buf_size": 4096, 00:14:16.992 "enable_recv_pipe": true, 00:14:16.992 "enable_quickack": false, 00:14:16.992 "enable_placement_id": 0, 00:14:16.992 "enable_zerocopy_send_server": true, 00:14:16.992 "enable_zerocopy_send_client": false, 00:14:16.992 "zerocopy_threshold": 0, 00:14:16.992 "tls_version": 0, 00:14:16.992 "enable_ktls": false 00:14:16.992 } 00:14:16.992 }, 00:14:16.992 { 00:14:16.992 "method": "sock_impl_set_options", 00:14:16.992 "params": { 00:14:16.992 "impl_name": "posix", 00:14:16.992 "recv_buf_size": 2097152, 00:14:16.992 "send_buf_size": 2097152, 00:14:16.992 "enable_recv_pipe": true, 00:14:16.992 "enable_quickack": false, 00:14:16.992 "enable_placement_id": 0, 00:14:16.992 "enable_zerocopy_send_server": true, 00:14:16.992 "enable_zerocopy_send_client": false, 00:14:16.992 "zerocopy_threshold": 0, 00:14:16.992 "tls_version": 0, 00:14:16.992 "enable_ktls": false 00:14:16.992 } 00:14:16.992 }, 00:14:16.992 { 00:14:16.992 "method": "sock_impl_set_options", 00:14:16.992 "params": { 00:14:16.992 "impl_name": "uring", 00:14:16.992 "recv_buf_size": 2097152, 00:14:16.992 "send_buf_size": 2097152, 00:14:16.992 "enable_recv_pipe": true, 00:14:16.992 "enable_quickack": false, 00:14:16.992 "enable_placement_id": 0, 00:14:16.992 "enable_zerocopy_send_server": false, 00:14:16.992 "enable_zerocopy_send_client": false, 00:14:16.992 "zerocopy_threshold": 0, 00:14:16.992 "tls_version": 0, 00:14:16.992 "enable_ktls": false 00:14:16.992 } 00:14:16.992 } 00:14:16.992 ] 00:14:16.992 }, 00:14:16.992 { 00:14:16.992 "subsystem": "vmd", 00:14:16.992 "config": [] 00:14:16.992 }, 00:14:16.992 { 00:14:16.992 "subsystem": "accel", 00:14:16.992 "config": [ 00:14:16.992 { 00:14:16.992 "method": "accel_set_options", 00:14:16.993 "params": { 00:14:16.993 "small_cache_size": 128, 00:14:16.993 "large_cache_size": 16, 00:14:16.993 "task_count": 2048, 00:14:16.993 "sequence_count": 2048, 00:14:16.993 "buf_count": 2048 00:14:16.993 } 00:14:16.993 } 00:14:16.993 ] 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "subsystem": "bdev", 00:14:16.993 "config": [ 00:14:16.993 { 00:14:16.993 "method": "bdev_set_options", 00:14:16.993 "params": { 00:14:16.993 "bdev_io_pool_size": 65535, 00:14:16.993 "bdev_io_cache_size": 256, 00:14:16.993 "bdev_auto_examine": true, 00:14:16.993 "iobuf_small_cache_size": 128, 00:14:16.993 "iobuf_large_cache_size": 16 
00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "bdev_raid_set_options", 00:14:16.993 "params": { 00:14:16.993 "process_window_size_kb": 1024 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "bdev_iscsi_set_options", 00:14:16.993 "params": { 00:14:16.993 "timeout_sec": 30 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "bdev_nvme_set_options", 00:14:16.993 "params": { 00:14:16.993 "action_on_timeout": "none", 00:14:16.993 "timeout_us": 0, 00:14:16.993 "timeout_admin_us": 0, 00:14:16.993 "keep_alive_timeout_ms": 10000, 00:14:16.993 "arbitration_burst": 0, 00:14:16.993 "low_priority_weight": 0, 00:14:16.993 "medium_priority_weight": 0, 00:14:16.993 "high_priority_weight": 0, 00:14:16.993 "nvme_adminq_poll_period_us": 10000, 00:14:16.993 "nvme_ioq_poll_period_us": 0, 00:14:16.993 "io_queue_requests": 0, 00:14:16.993 "delay_cmd_submit": true, 00:14:16.993 "transport_retry_count": 4, 00:14:16.993 "bdev_retry_count": 3, 00:14:16.993 "transport_ack_timeout": 0, 00:14:16.993 "ctrlr_loss_timeout_sec": 0, 00:14:16.993 "reconnect_delay_sec": 0, 00:14:16.993 "fast_io_fail_timeout_sec": 0, 00:14:16.993 "disable_auto_failback": false, 00:14:16.993 "generate_uuids": false, 00:14:16.993 "transport_tos": 0, 00:14:16.993 "nvme_error_stat": false, 00:14:16.993 "rdma_srq_size": 0, 00:14:16.993 "io_path_stat": false, 00:14:16.993 "allow_accel_sequence": false, 00:14:16.993 "rdma_max_cq_size": 0, 00:14:16.993 "rdma_cm_event_timeout_ms": 0, 00:14:16.993 "dhchap_digests": [ 00:14:16.993 "sha256", 00:14:16.993 "sha384", 00:14:16.993 "sha512" 00:14:16.993 ], 00:14:16.993 "dhchap_dhgroups": [ 00:14:16.993 "null", 00:14:16.993 "ffdhe2048", 00:14:16.993 "ffdhe3072", 00:14:16.993 "ffdhe4096", 00:14:16.993 "ffdhe6144", 00:14:16.993 "ffdhe8192" 00:14:16.993 ] 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "bdev_nvme_set_hotplug", 00:14:16.993 "params": { 00:14:16.993 "period_us": 100000, 00:14:16.993 "enable": false 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "bdev_malloc_create", 00:14:16.993 "params": { 00:14:16.993 "name": "malloc0", 00:14:16.993 "num_blocks": 8192, 00:14:16.993 "block_size": 4096, 00:14:16.993 "physical_block_size": 4096, 00:14:16.993 "uuid": "e902b90a-bd43-4eeb-95a0-ed41d3d99740", 00:14:16.993 "optimal_io_boundary": 0 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "bdev_wait_for_examine" 00:14:16.993 } 00:14:16.993 ] 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "subsystem": "nbd", 00:14:16.993 "config": [] 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "subsystem": "scheduler", 00:14:16.993 "config": [ 00:14:16.993 { 00:14:16.993 "method": "framework_set_scheduler", 00:14:16.993 "params": { 00:14:16.993 "name": "static" 00:14:16.993 } 00:14:16.993 } 00:14:16.993 ] 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "subsystem": "nvmf", 00:14:16.993 "config": [ 00:14:16.993 { 00:14:16.993 "method": "nvmf_set_config", 00:14:16.993 "params": { 00:14:16.993 "discovery_filter": "match_any", 00:14:16.993 "admin_cmd_passthru": { 00:14:16.993 "identify_ctrlr": false 00:14:16.993 } 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "nvmf_set_max_subsystems", 00:14:16.993 "params": { 00:14:16.993 "max_subsystems": 1024 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "nvmf_set_crdt", 00:14:16.993 "params": { 00:14:16.993 "crdt1": 0, 00:14:16.993 "crdt2": 0, 00:14:16.993 "crdt3": 0 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": 
"nvmf_create_transport", 00:14:16.993 "params": { 00:14:16.993 "trtype": "TCP", 00:14:16.993 "max_queue_depth": 128, 00:14:16.993 "max_io_qpairs_per_ctrlr": 127, 00:14:16.993 "in_capsule_data_size": 4096, 00:14:16.993 "max_io_size": 131072, 00:14:16.993 "io_unit_size": 131072, 00:14:16.993 "max_aq_depth": 128, 00:14:16.993 "num_shared_buffers": 511, 00:14:16.993 "buf_cache_size": 4294967295, 00:14:16.993 "dif_insert_or_strip": false, 00:14:16.993 "zcopy": false, 00:14:16.993 "c2h_success": false, 00:14:16.993 "sock_priority": 0, 00:14:16.993 "abort_timeout_sec": 1, 00:14:16.993 "ack_timeout": 0, 00:14:16.993 "data_wr_pool_size": 0 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "nvmf_create_subsystem", 00:14:16.993 "params": { 00:14:16.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.993 "allow_any_host": false, 00:14:16.993 "serial_number": "SPDK00000000000001", 00:14:16.993 "model_number": "SPDK bdev Controller", 00:14:16.993 "max_namespaces": 10, 00:14:16.993 "min_cntlid": 1, 00:14:16.993 "max_cntlid": 65519, 00:14:16.993 "ana_reporting": false 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "nvmf_subsystem_add_host", 00:14:16.993 "params": { 00:14:16.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.993 "host": "nqn.2016-06.io.spdk:host1", 00:14:16.993 "psk": "/tmp/tmp.Z7ZciVkTZZ" 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "nvmf_subsystem_add_ns", 00:14:16.993 "params": { 00:14:16.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.993 "namespace": { 00:14:16.993 "nsid": 1, 00:14:16.993 "bdev_name": "malloc0", 00:14:16.993 "nguid": "E902B90ABD434EEB95A0ED41D3D99740", 00:14:16.993 "uuid": "e902b90a-bd43-4eeb-95a0-ed41d3d99740", 00:14:16.993 "no_auto_visible": false 00:14:16.993 } 00:14:16.993 } 00:14:16.993 }, 00:14:16.993 { 00:14:16.993 "method": "nvmf_subsystem_add_listener", 00:14:16.993 "params": { 00:14:16.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.993 "listen_address": { 00:14:16.993 "trtype": "TCP", 00:14:16.993 "adrfam": "IPv4", 00:14:16.993 "traddr": "10.0.0.2", 00:14:16.993 "trsvcid": "4420" 00:14:16.993 }, 00:14:16.993 "secure_channel": true 00:14:16.993 } 00:14:16.993 } 00:14:16.993 ] 00:14:16.993 } 00:14:16.993 ] 00:14:16.993 }' 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73776 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73776 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73776 ']' 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.993 16:29:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.993 [2024-07-15 16:29:02.359828] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:16.994 [2024-07-15 16:29:02.359936] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.994 [2024-07-15 16:29:02.495723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.251 [2024-07-15 16:29:02.618124] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.251 [2024-07-15 16:29:02.618193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.251 [2024-07-15 16:29:02.618204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.251 [2024-07-15 16:29:02.618213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.251 [2024-07-15 16:29:02.618220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.251 [2024-07-15 16:29:02.618312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.251 [2024-07-15 16:29:02.787808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:17.509 [2024-07-15 16:29:02.857831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.509 [2024-07-15 16:29:02.873787] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:17.509 [2024-07-15 16:29:02.889775] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:17.509 [2024-07-15 16:29:02.889984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73807 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73807 /var/tmp/bdevperf.sock 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73807 ']' 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
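The bdevperf instance launched next gets the same treatment: the bdevperfconf JSON saved at target/tls.sh@197 is fed back through -c /dev/fd/63, so the TLS-enabled bdev_nvme_attach_controller happens during startup rather than via a later RPC. A sketch of that pattern, again assuming process substitution; the flag values are copied from the command in the trace, and -z keeps bdevperf idle until it is driven over its RPC socket:

  bdevperfconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")        # -c /dev/fd/63 in the trace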
00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.076 16:29:03 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:18.076 "subsystems": [ 00:14:18.076 { 00:14:18.076 "subsystem": "keyring", 00:14:18.076 "config": [] 00:14:18.076 }, 00:14:18.076 { 00:14:18.076 "subsystem": "iobuf", 00:14:18.076 "config": [ 00:14:18.076 { 00:14:18.076 "method": "iobuf_set_options", 00:14:18.076 "params": { 00:14:18.076 "small_pool_count": 8192, 00:14:18.076 "large_pool_count": 1024, 00:14:18.076 "small_bufsize": 8192, 00:14:18.076 "large_bufsize": 135168 00:14:18.076 } 00:14:18.076 } 00:14:18.076 ] 00:14:18.076 }, 00:14:18.076 { 00:14:18.076 "subsystem": "sock", 00:14:18.076 "config": [ 00:14:18.076 { 00:14:18.076 "method": "sock_set_default_impl", 00:14:18.076 "params": { 00:14:18.076 "impl_name": "uring" 00:14:18.076 } 00:14:18.076 }, 00:14:18.076 { 00:14:18.076 "method": "sock_impl_set_options", 00:14:18.076 "params": { 00:14:18.076 "impl_name": "ssl", 00:14:18.076 "recv_buf_size": 4096, 00:14:18.076 "send_buf_size": 4096, 00:14:18.076 "enable_recv_pipe": true, 00:14:18.076 "enable_quickack": false, 00:14:18.076 "enable_placement_id": 0, 00:14:18.076 "enable_zerocopy_send_server": true, 00:14:18.076 "enable_zerocopy_send_client": false, 00:14:18.076 "zerocopy_threshold": 0, 00:14:18.076 "tls_version": 0, 00:14:18.076 "enable_ktls": false 00:14:18.076 } 00:14:18.076 }, 00:14:18.076 { 00:14:18.077 "method": "sock_impl_set_options", 00:14:18.077 "params": { 00:14:18.077 "impl_name": "posix", 00:14:18.077 "recv_buf_size": 2097152, 00:14:18.077 "send_buf_size": 2097152, 00:14:18.077 "enable_recv_pipe": true, 00:14:18.077 "enable_quickack": false, 00:14:18.077 "enable_placement_id": 0, 00:14:18.077 "enable_zerocopy_send_server": true, 00:14:18.077 "enable_zerocopy_send_client": false, 00:14:18.077 "zerocopy_threshold": 0, 00:14:18.077 "tls_version": 0, 00:14:18.077 "enable_ktls": false 00:14:18.077 } 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "method": "sock_impl_set_options", 00:14:18.077 "params": { 00:14:18.077 "impl_name": "uring", 00:14:18.077 "recv_buf_size": 2097152, 00:14:18.077 "send_buf_size": 2097152, 00:14:18.077 "enable_recv_pipe": true, 00:14:18.077 "enable_quickack": false, 00:14:18.077 "enable_placement_id": 0, 00:14:18.077 "enable_zerocopy_send_server": false, 00:14:18.077 "enable_zerocopy_send_client": false, 00:14:18.077 "zerocopy_threshold": 0, 00:14:18.077 "tls_version": 0, 00:14:18.077 "enable_ktls": false 00:14:18.077 } 00:14:18.077 } 00:14:18.077 ] 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "subsystem": "vmd", 00:14:18.077 "config": [] 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "subsystem": "accel", 00:14:18.077 "config": [ 00:14:18.077 { 00:14:18.077 "method": "accel_set_options", 00:14:18.077 "params": { 00:14:18.077 "small_cache_size": 128, 00:14:18.077 "large_cache_size": 16, 00:14:18.077 "task_count": 2048, 00:14:18.077 "sequence_count": 2048, 00:14:18.077 "buf_count": 2048 00:14:18.077 } 00:14:18.077 } 00:14:18.077 ] 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "subsystem": "bdev", 00:14:18.077 "config": [ 00:14:18.077 { 00:14:18.077 "method": "bdev_set_options", 00:14:18.077 "params": { 00:14:18.077 "bdev_io_pool_size": 65535, 00:14:18.077 
"bdev_io_cache_size": 256, 00:14:18.077 "bdev_auto_examine": true, 00:14:18.077 "iobuf_small_cache_size": 128, 00:14:18.077 "iobuf_large_cache_size": 16 00:14:18.077 } 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "method": "bdev_raid_set_options", 00:14:18.077 "params": { 00:14:18.077 "process_window_size_kb": 1024 00:14:18.077 } 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "method": "bdev_iscsi_set_options", 00:14:18.077 "params": { 00:14:18.077 "timeout_sec": 30 00:14:18.077 } 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "method": "bdev_nvme_set_options", 00:14:18.077 "params": { 00:14:18.077 "action_on_timeout": "none", 00:14:18.077 "timeout_us": 0, 00:14:18.077 "timeout_admin_us": 0, 00:14:18.077 "keep_alive_timeout_ms": 10000, 00:14:18.077 "arbitration_burst": 0, 00:14:18.077 "low_priority_weight": 0, 00:14:18.077 "medium_priority_weight": 0, 00:14:18.077 "high_priority_weight": 0, 00:14:18.077 "nvme_adminq_poll_period_us": 10000, 00:14:18.077 "nvme_ioq_poll_period_us": 0, 00:14:18.077 "io_queue_requests": 512, 00:14:18.077 "delay_cmd_submit": true, 00:14:18.077 "transport_retry_count": 4, 00:14:18.077 "bdev_retry_count": 3, 00:14:18.077 "transport_ack_timeout": 0, 00:14:18.077 "ctrlr_loss_timeout_sec": 0, 00:14:18.077 "reconnect_delay_sec": 0, 00:14:18.077 "fast_io_fail_timeout_sec": 0, 00:14:18.077 "disable_auto_failback": false, 00:14:18.077 "generate_uuids": false, 00:14:18.077 "transport_tos": 0, 00:14:18.077 "nvme_error_stat": false, 00:14:18.077 "rdma_srq_size": 0, 00:14:18.077 "io_path_stat": false, 00:14:18.077 "allow_accel_sequence": false, 00:14:18.077 "rdma_max_cq_size": 0, 00:14:18.077 "rdma_cm_event_timeout_ms": 0, 00:14:18.077 "dhchap_digests": [ 00:14:18.077 "sha256", 00:14:18.077 "sha384", 00:14:18.077 "sha512" 00:14:18.077 ], 00:14:18.077 "dhchap_dhgroups": [ 00:14:18.077 "null", 00:14:18.077 "ffdhe2048", 00:14:18.077 "ffdhe3072", 00:14:18.077 "ffdhe4096", 00:14:18.077 "ffdhe6144", 00:14:18.077 "ffdhe8192" 00:14:18.077 ] 00:14:18.077 } 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "method": "bdev_nvme_attach_controller", 00:14:18.077 "params": { 00:14:18.077 "name": "TLSTEST", 00:14:18.077 "trtype": "TCP", 00:14:18.077 "adrfam": "IPv4", 00:14:18.077 "traddr": "10.0.0.2", 00:14:18.077 "trsvcid": "4420", 00:14:18.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.077 "prchk_reftag": false, 00:14:18.077 "prchk_guard": false, 00:14:18.077 "ctrlr_loss_timeout_sec": 0, 00:14:18.077 "reconnect_delay_sec": 0, 00:14:18.077 "fast_io_fail_timeout_sec": 0, 00:14:18.077 "psk": "/tmp/tmp.Z7ZciVkTZZ", 00:14:18.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:18.077 "hdgst": false, 00:14:18.077 "ddgst": false 00:14:18.077 } 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "method": "bdev_nvme_set_hotplug", 00:14:18.077 "params": { 00:14:18.077 "period_us": 100000, 00:14:18.077 "enable": false 00:14:18.077 } 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "method": "bdev_wait_for_examine" 00:14:18.077 } 00:14:18.077 ] 00:14:18.077 }, 00:14:18.077 { 00:14:18.077 "subsystem": "nbd", 00:14:18.077 "config": [] 00:14:18.077 } 00:14:18.077 ] 00:14:18.077 }' 00:14:18.077 [2024-07-15 16:29:03.406896] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:14:18.077 [2024-07-15 16:29:03.407440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73807 ] 00:14:18.077 [2024-07-15 16:29:03.543520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.335 [2024-07-15 16:29:03.665909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.335 [2024-07-15 16:29:03.802594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:18.335 [2024-07-15 16:29:03.840696] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.335 [2024-07-15 16:29:03.840833] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:18.903 16:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.903 16:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:18.903 16:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:19.162 Running I/O for 10 seconds... 00:14:29.137 00:14:29.137 Latency(us) 00:14:29.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.137 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:29.137 Verification LBA range: start 0x0 length 0x2000 00:14:29.137 TLSTESTn1 : 10.02 3838.77 15.00 0.00 0.00 33278.03 7179.17 36938.47 00:14:29.137 =================================================================================================================== 00:14:29.137 Total : 3838.77 15.00 0.00 0.00 33278.03 7179.17 36938.47 00:14:29.137 0 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73807 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73807 ']' 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73807 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73807 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:29.137 killing process with pid 73807 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73807' 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73807 00:14:29.137 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.137 00:14:29.137 Latency(us) 00:14:29.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.137 =================================================================================================================== 00:14:29.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.137 [2024-07-15 16:29:14.583640] app.c:1024:log_deprecation_hits: *WARNING*: 
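Whether the controller is attached via RPC (as in the first bdevperf run at target/tls.sh@192) or baked into the startup config (as here), the test is driven the same way: bdevperf sits idle until the perform_tests helper pokes it over the private RPC socket. Both commands below are the ones shown in the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests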
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:29.137 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73807 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73776 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73776 ']' 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73776 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73776 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73776' 00:14:29.396 killing process with pid 73776 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73776 00:14:29.396 [2024-07-15 16:29:14.922121] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:29.396 16:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73776 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73950 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73950 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73950 ']' 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.655 16:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.914 [2024-07-15 16:29:15.209424] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:29.914 [2024-07-15 16:29:15.209502] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.914 [2024-07-15 16:29:15.349957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.914 [2024-07-15 16:29:15.461545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:29.914 [2024-07-15 16:29:15.461605] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.914 [2024-07-15 16:29:15.461618] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.914 [2024-07-15 16:29:15.461627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.914 [2024-07-15 16:29:15.461635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.914 [2024-07-15 16:29:15.461667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.173 [2024-07-15 16:29:15.518955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:30.739 16:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.739 16:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:30.739 16:29:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.739 16:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.739 16:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.740 16:29:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.740 16:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Z7ZciVkTZZ 00:14:30.740 16:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Z7ZciVkTZZ 00:14:30.740 16:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:31.002 [2024-07-15 16:29:16.539313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.259 16:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:31.517 16:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:31.776 [2024-07-15 16:29:17.083406] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:31.776 [2024-07-15 16:29:17.083649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.776 16:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:32.035 malloc0 00:14:32.035 16:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z7ZciVkTZZ 00:14:32.293 [2024-07-15 16:29:17.807611] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74000 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74000 /var/tmp/bdevperf.sock 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@829 -- # '[' -z 74000 ']' 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.293 16:29:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.551 [2024-07-15 16:29:17.879961] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:32.551 [2024-07-15 16:29:17.880077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74000 ] 00:14:32.551 [2024-07-15 16:29:18.016087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.835 [2024-07-15 16:29:18.130692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.835 [2024-07-15 16:29:18.184617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:33.413 16:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.413 16:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:33.413 16:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z7ZciVkTZZ 00:14:33.671 16:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:33.930 [2024-07-15 16:29:19.370075] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.930 nvme0n1 00:14:33.930 16:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:34.189 Running I/O for 1 seconds... 
00:14:35.124 00:14:35.124 Latency(us) 00:14:35.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.124 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:35.124 Verification LBA range: start 0x0 length 0x2000 00:14:35.124 nvme0n1 : 1.02 3762.17 14.70 0.00 0.00 33663.28 7089.80 24784.52 00:14:35.124 =================================================================================================================== 00:14:35.124 Total : 3762.17 14.70 0.00 0.00 33663.28 7089.80 24784.52 00:14:35.124 0 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74000 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74000 ']' 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74000 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74000 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:35.124 killing process with pid 74000 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74000' 00:14:35.124 Received shutdown signal, test time was about 1.000000 seconds 00:14:35.124 00:14:35.124 Latency(us) 00:14:35.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.124 =================================================================================================================== 00:14:35.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74000 00:14:35.124 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74000 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73950 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73950 ']' 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73950 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73950 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:35.382 killing process with pid 73950 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73950' 00:14:35.382 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73950 00:14:35.382 [2024-07-15 16:29:20.913605] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:35.383 16:29:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73950 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74057 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74057 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74057 ']' 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.950 16:29:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.950 [2024-07-15 16:29:21.320754] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:35.950 [2024-07-15 16:29:21.320895] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.950 [2024-07-15 16:29:21.460403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.227 [2024-07-15 16:29:21.614222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.227 [2024-07-15 16:29:21.614286] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.227 [2024-07-15 16:29:21.614298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.227 [2024-07-15 16:29:21.614307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.228 [2024-07-15 16:29:21.614317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
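For reference, the app_setup_trace notices above spell out how to grab the tracepoint data while this target is still running; a minimal sketch of that capture, using the instance ID 0 and trace file name printed in the log (the destination path for the copy is only illustrative):

    # snapshot the nvmf tracepoint group of instance 0 at runtime
    spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.snapshot

The cleanup stage near the end of this log does the equivalent by tar-ing nvmf_trace.0 out of /dev/shm into the output directory.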
00:14:36.228 [2024-07-15 16:29:21.614355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.228 [2024-07-15 16:29:21.694234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:36.797 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.797 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:36.797 16:29:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:36.797 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:36.797 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.055 16:29:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.055 16:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:37.055 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.056 [2024-07-15 16:29:22.388881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.056 malloc0 00:14:37.056 [2024-07-15 16:29:22.423489] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:37.056 [2024-07-15 16:29:22.423732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74089 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74089 /var/tmp/bdevperf.sock 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74089 ']' 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.056 16:29:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.056 [2024-07-15 16:29:22.505751] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:14:37.056 [2024-07-15 16:29:22.505848] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74089 ] 00:14:37.314 [2024-07-15 16:29:22.641352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.314 [2024-07-15 16:29:22.775822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.314 [2024-07-15 16:29:22.836299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:38.246 16:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.246 16:29:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:38.246 16:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z7ZciVkTZZ 00:14:38.504 16:29:23 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:38.504 [2024-07-15 16:29:24.048048] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.763 nvme0n1 00:14:38.763 16:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:38.763 Running I/O for 1 seconds... 00:14:39.754 00:14:39.754 Latency(us) 00:14:39.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.754 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:39.755 Verification LBA range: start 0x0 length 0x2000 00:14:39.755 nvme0n1 : 1.02 3773.16 14.74 0.00 0.00 33536.85 9472.93 23116.33 00:14:39.755 =================================================================================================================== 00:14:39.755 Total : 3773.16 14.74 0.00 0.00 33536.85 9472.93 23116.33 00:14:39.755 0 00:14:39.755 16:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:39.755 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.755 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.013 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.013 16:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:40.013 "subsystems": [ 00:14:40.013 { 00:14:40.013 "subsystem": "keyring", 00:14:40.013 "config": [ 00:14:40.013 { 00:14:40.013 "method": "keyring_file_add_key", 00:14:40.013 "params": { 00:14:40.013 "name": "key0", 00:14:40.013 "path": "/tmp/tmp.Z7ZciVkTZZ" 00:14:40.013 } 00:14:40.013 } 00:14:40.013 ] 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "subsystem": "iobuf", 00:14:40.013 "config": [ 00:14:40.013 { 00:14:40.013 "method": "iobuf_set_options", 00:14:40.013 "params": { 00:14:40.013 "small_pool_count": 8192, 00:14:40.013 "large_pool_count": 1024, 00:14:40.013 "small_bufsize": 8192, 00:14:40.013 "large_bufsize": 135168 00:14:40.013 } 00:14:40.013 } 00:14:40.013 ] 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "subsystem": "sock", 00:14:40.013 "config": [ 00:14:40.013 { 00:14:40.013 "method": "sock_set_default_impl", 00:14:40.013 "params": { 00:14:40.013 "impl_name": "uring" 
00:14:40.013 } 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "method": "sock_impl_set_options", 00:14:40.013 "params": { 00:14:40.013 "impl_name": "ssl", 00:14:40.013 "recv_buf_size": 4096, 00:14:40.013 "send_buf_size": 4096, 00:14:40.013 "enable_recv_pipe": true, 00:14:40.013 "enable_quickack": false, 00:14:40.013 "enable_placement_id": 0, 00:14:40.013 "enable_zerocopy_send_server": true, 00:14:40.013 "enable_zerocopy_send_client": false, 00:14:40.013 "zerocopy_threshold": 0, 00:14:40.013 "tls_version": 0, 00:14:40.013 "enable_ktls": false 00:14:40.013 } 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "method": "sock_impl_set_options", 00:14:40.013 "params": { 00:14:40.013 "impl_name": "posix", 00:14:40.013 "recv_buf_size": 2097152, 00:14:40.013 "send_buf_size": 2097152, 00:14:40.013 "enable_recv_pipe": true, 00:14:40.013 "enable_quickack": false, 00:14:40.013 "enable_placement_id": 0, 00:14:40.013 "enable_zerocopy_send_server": true, 00:14:40.013 "enable_zerocopy_send_client": false, 00:14:40.013 "zerocopy_threshold": 0, 00:14:40.013 "tls_version": 0, 00:14:40.013 "enable_ktls": false 00:14:40.013 } 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "method": "sock_impl_set_options", 00:14:40.013 "params": { 00:14:40.013 "impl_name": "uring", 00:14:40.013 "recv_buf_size": 2097152, 00:14:40.013 "send_buf_size": 2097152, 00:14:40.013 "enable_recv_pipe": true, 00:14:40.013 "enable_quickack": false, 00:14:40.013 "enable_placement_id": 0, 00:14:40.013 "enable_zerocopy_send_server": false, 00:14:40.013 "enable_zerocopy_send_client": false, 00:14:40.013 "zerocopy_threshold": 0, 00:14:40.013 "tls_version": 0, 00:14:40.013 "enable_ktls": false 00:14:40.013 } 00:14:40.013 } 00:14:40.013 ] 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "subsystem": "vmd", 00:14:40.013 "config": [] 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "subsystem": "accel", 00:14:40.013 "config": [ 00:14:40.013 { 00:14:40.013 "method": "accel_set_options", 00:14:40.013 "params": { 00:14:40.013 "small_cache_size": 128, 00:14:40.013 "large_cache_size": 16, 00:14:40.013 "task_count": 2048, 00:14:40.013 "sequence_count": 2048, 00:14:40.013 "buf_count": 2048 00:14:40.013 } 00:14:40.013 } 00:14:40.013 ] 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "subsystem": "bdev", 00:14:40.013 "config": [ 00:14:40.013 { 00:14:40.013 "method": "bdev_set_options", 00:14:40.013 "params": { 00:14:40.013 "bdev_io_pool_size": 65535, 00:14:40.013 "bdev_io_cache_size": 256, 00:14:40.013 "bdev_auto_examine": true, 00:14:40.013 "iobuf_small_cache_size": 128, 00:14:40.013 "iobuf_large_cache_size": 16 00:14:40.013 } 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "method": "bdev_raid_set_options", 00:14:40.013 "params": { 00:14:40.013 "process_window_size_kb": 1024 00:14:40.013 } 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "method": "bdev_iscsi_set_options", 00:14:40.013 "params": { 00:14:40.013 "timeout_sec": 30 00:14:40.013 } 00:14:40.013 }, 00:14:40.013 { 00:14:40.013 "method": "bdev_nvme_set_options", 00:14:40.013 "params": { 00:14:40.013 "action_on_timeout": "none", 00:14:40.013 "timeout_us": 0, 00:14:40.013 "timeout_admin_us": 0, 00:14:40.013 "keep_alive_timeout_ms": 10000, 00:14:40.013 "arbitration_burst": 0, 00:14:40.013 "low_priority_weight": 0, 00:14:40.013 "medium_priority_weight": 0, 00:14:40.013 "high_priority_weight": 0, 00:14:40.014 "nvme_adminq_poll_period_us": 10000, 00:14:40.014 "nvme_ioq_poll_period_us": 0, 00:14:40.014 "io_queue_requests": 0, 00:14:40.014 "delay_cmd_submit": true, 00:14:40.014 "transport_retry_count": 4, 00:14:40.014 "bdev_retry_count": 3, 
00:14:40.014 "transport_ack_timeout": 0, 00:14:40.014 "ctrlr_loss_timeout_sec": 0, 00:14:40.014 "reconnect_delay_sec": 0, 00:14:40.014 "fast_io_fail_timeout_sec": 0, 00:14:40.014 "disable_auto_failback": false, 00:14:40.014 "generate_uuids": false, 00:14:40.014 "transport_tos": 0, 00:14:40.014 "nvme_error_stat": false, 00:14:40.014 "rdma_srq_size": 0, 00:14:40.014 "io_path_stat": false, 00:14:40.014 "allow_accel_sequence": false, 00:14:40.014 "rdma_max_cq_size": 0, 00:14:40.014 "rdma_cm_event_timeout_ms": 0, 00:14:40.014 "dhchap_digests": [ 00:14:40.014 "sha256", 00:14:40.014 "sha384", 00:14:40.014 "sha512" 00:14:40.014 ], 00:14:40.014 "dhchap_dhgroups": [ 00:14:40.014 "null", 00:14:40.014 "ffdhe2048", 00:14:40.014 "ffdhe3072", 00:14:40.014 "ffdhe4096", 00:14:40.014 "ffdhe6144", 00:14:40.014 "ffdhe8192" 00:14:40.014 ] 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "bdev_nvme_set_hotplug", 00:14:40.014 "params": { 00:14:40.014 "period_us": 100000, 00:14:40.014 "enable": false 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "bdev_malloc_create", 00:14:40.014 "params": { 00:14:40.014 "name": "malloc0", 00:14:40.014 "num_blocks": 8192, 00:14:40.014 "block_size": 4096, 00:14:40.014 "physical_block_size": 4096, 00:14:40.014 "uuid": "9f125491-b5e0-494b-adfe-2cd3d45d8eb4", 00:14:40.014 "optimal_io_boundary": 0 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "bdev_wait_for_examine" 00:14:40.014 } 00:14:40.014 ] 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "subsystem": "nbd", 00:14:40.014 "config": [] 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "subsystem": "scheduler", 00:14:40.014 "config": [ 00:14:40.014 { 00:14:40.014 "method": "framework_set_scheduler", 00:14:40.014 "params": { 00:14:40.014 "name": "static" 00:14:40.014 } 00:14:40.014 } 00:14:40.014 ] 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "subsystem": "nvmf", 00:14:40.014 "config": [ 00:14:40.014 { 00:14:40.014 "method": "nvmf_set_config", 00:14:40.014 "params": { 00:14:40.014 "discovery_filter": "match_any", 00:14:40.014 "admin_cmd_passthru": { 00:14:40.014 "identify_ctrlr": false 00:14:40.014 } 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "nvmf_set_max_subsystems", 00:14:40.014 "params": { 00:14:40.014 "max_subsystems": 1024 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "nvmf_set_crdt", 00:14:40.014 "params": { 00:14:40.014 "crdt1": 0, 00:14:40.014 "crdt2": 0, 00:14:40.014 "crdt3": 0 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "nvmf_create_transport", 00:14:40.014 "params": { 00:14:40.014 "trtype": "TCP", 00:14:40.014 "max_queue_depth": 128, 00:14:40.014 "max_io_qpairs_per_ctrlr": 127, 00:14:40.014 "in_capsule_data_size": 4096, 00:14:40.014 "max_io_size": 131072, 00:14:40.014 "io_unit_size": 131072, 00:14:40.014 "max_aq_depth": 128, 00:14:40.014 "num_shared_buffers": 511, 00:14:40.014 "buf_cache_size": 4294967295, 00:14:40.014 "dif_insert_or_strip": false, 00:14:40.014 "zcopy": false, 00:14:40.014 "c2h_success": false, 00:14:40.014 "sock_priority": 0, 00:14:40.014 "abort_timeout_sec": 1, 00:14:40.014 "ack_timeout": 0, 00:14:40.014 "data_wr_pool_size": 0 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "nvmf_create_subsystem", 00:14:40.014 "params": { 00:14:40.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.014 "allow_any_host": false, 00:14:40.014 "serial_number": "00000000000000000000", 00:14:40.014 "model_number": "SPDK bdev Controller", 00:14:40.014 "max_namespaces": 32, 
00:14:40.014 "min_cntlid": 1, 00:14:40.014 "max_cntlid": 65519, 00:14:40.014 "ana_reporting": false 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "nvmf_subsystem_add_host", 00:14:40.014 "params": { 00:14:40.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.014 "host": "nqn.2016-06.io.spdk:host1", 00:14:40.014 "psk": "key0" 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "nvmf_subsystem_add_ns", 00:14:40.014 "params": { 00:14:40.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.014 "namespace": { 00:14:40.014 "nsid": 1, 00:14:40.014 "bdev_name": "malloc0", 00:14:40.014 "nguid": "9F125491B5E0494BADFE2CD3D45D8EB4", 00:14:40.014 "uuid": "9f125491-b5e0-494b-adfe-2cd3d45d8eb4", 00:14:40.014 "no_auto_visible": false 00:14:40.014 } 00:14:40.014 } 00:14:40.014 }, 00:14:40.014 { 00:14:40.014 "method": "nvmf_subsystem_add_listener", 00:14:40.014 "params": { 00:14:40.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.014 "listen_address": { 00:14:40.014 "trtype": "TCP", 00:14:40.014 "adrfam": "IPv4", 00:14:40.014 "traddr": "10.0.0.2", 00:14:40.014 "trsvcid": "4420" 00:14:40.014 }, 00:14:40.014 "secure_channel": true 00:14:40.014 } 00:14:40.014 } 00:14:40.014 ] 00:14:40.014 } 00:14:40.014 ] 00:14:40.014 }' 00:14:40.014 16:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:40.273 16:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:40.273 "subsystems": [ 00:14:40.273 { 00:14:40.273 "subsystem": "keyring", 00:14:40.273 "config": [ 00:14:40.273 { 00:14:40.273 "method": "keyring_file_add_key", 00:14:40.273 "params": { 00:14:40.273 "name": "key0", 00:14:40.273 "path": "/tmp/tmp.Z7ZciVkTZZ" 00:14:40.273 } 00:14:40.273 } 00:14:40.273 ] 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "subsystem": "iobuf", 00:14:40.273 "config": [ 00:14:40.273 { 00:14:40.273 "method": "iobuf_set_options", 00:14:40.273 "params": { 00:14:40.273 "small_pool_count": 8192, 00:14:40.273 "large_pool_count": 1024, 00:14:40.273 "small_bufsize": 8192, 00:14:40.273 "large_bufsize": 135168 00:14:40.273 } 00:14:40.273 } 00:14:40.273 ] 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "subsystem": "sock", 00:14:40.273 "config": [ 00:14:40.273 { 00:14:40.273 "method": "sock_set_default_impl", 00:14:40.273 "params": { 00:14:40.273 "impl_name": "uring" 00:14:40.273 } 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "method": "sock_impl_set_options", 00:14:40.273 "params": { 00:14:40.273 "impl_name": "ssl", 00:14:40.273 "recv_buf_size": 4096, 00:14:40.273 "send_buf_size": 4096, 00:14:40.273 "enable_recv_pipe": true, 00:14:40.273 "enable_quickack": false, 00:14:40.273 "enable_placement_id": 0, 00:14:40.273 "enable_zerocopy_send_server": true, 00:14:40.273 "enable_zerocopy_send_client": false, 00:14:40.273 "zerocopy_threshold": 0, 00:14:40.273 "tls_version": 0, 00:14:40.273 "enable_ktls": false 00:14:40.273 } 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "method": "sock_impl_set_options", 00:14:40.273 "params": { 00:14:40.273 "impl_name": "posix", 00:14:40.273 "recv_buf_size": 2097152, 00:14:40.273 "send_buf_size": 2097152, 00:14:40.273 "enable_recv_pipe": true, 00:14:40.273 "enable_quickack": false, 00:14:40.273 "enable_placement_id": 0, 00:14:40.273 "enable_zerocopy_send_server": true, 00:14:40.273 "enable_zerocopy_send_client": false, 00:14:40.273 "zerocopy_threshold": 0, 00:14:40.273 "tls_version": 0, 00:14:40.273 "enable_ktls": false 00:14:40.273 } 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "method": 
"sock_impl_set_options", 00:14:40.273 "params": { 00:14:40.273 "impl_name": "uring", 00:14:40.273 "recv_buf_size": 2097152, 00:14:40.273 "send_buf_size": 2097152, 00:14:40.273 "enable_recv_pipe": true, 00:14:40.273 "enable_quickack": false, 00:14:40.273 "enable_placement_id": 0, 00:14:40.273 "enable_zerocopy_send_server": false, 00:14:40.273 "enable_zerocopy_send_client": false, 00:14:40.273 "zerocopy_threshold": 0, 00:14:40.273 "tls_version": 0, 00:14:40.273 "enable_ktls": false 00:14:40.273 } 00:14:40.273 } 00:14:40.273 ] 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "subsystem": "vmd", 00:14:40.273 "config": [] 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "subsystem": "accel", 00:14:40.273 "config": [ 00:14:40.273 { 00:14:40.273 "method": "accel_set_options", 00:14:40.273 "params": { 00:14:40.273 "small_cache_size": 128, 00:14:40.273 "large_cache_size": 16, 00:14:40.273 "task_count": 2048, 00:14:40.273 "sequence_count": 2048, 00:14:40.273 "buf_count": 2048 00:14:40.273 } 00:14:40.273 } 00:14:40.273 ] 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "subsystem": "bdev", 00:14:40.273 "config": [ 00:14:40.273 { 00:14:40.273 "method": "bdev_set_options", 00:14:40.273 "params": { 00:14:40.273 "bdev_io_pool_size": 65535, 00:14:40.273 "bdev_io_cache_size": 256, 00:14:40.273 "bdev_auto_examine": true, 00:14:40.273 "iobuf_small_cache_size": 128, 00:14:40.273 "iobuf_large_cache_size": 16 00:14:40.273 } 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "method": "bdev_raid_set_options", 00:14:40.273 "params": { 00:14:40.273 "process_window_size_kb": 1024 00:14:40.273 } 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "method": "bdev_iscsi_set_options", 00:14:40.273 "params": { 00:14:40.273 "timeout_sec": 30 00:14:40.273 } 00:14:40.273 }, 00:14:40.273 { 00:14:40.273 "method": "bdev_nvme_set_options", 00:14:40.273 "params": { 00:14:40.273 "action_on_timeout": "none", 00:14:40.273 "timeout_us": 0, 00:14:40.273 "timeout_admin_us": 0, 00:14:40.273 "keep_alive_timeout_ms": 10000, 00:14:40.273 "arbitration_burst": 0, 00:14:40.273 "low_priority_weight": 0, 00:14:40.273 "medium_priority_weight": 0, 00:14:40.273 "high_priority_weight": 0, 00:14:40.273 "nvme_adminq_poll_period_us": 10000, 00:14:40.273 "nvme_ioq_poll_period_us": 0, 00:14:40.273 "io_queue_requests": 512, 00:14:40.273 "delay_cmd_submit": true, 00:14:40.273 "transport_retry_count": 4, 00:14:40.273 "bdev_retry_count": 3, 00:14:40.273 "transport_ack_timeout": 0, 00:14:40.273 "ctrlr_loss_timeout_sec": 0, 00:14:40.273 "reconnect_delay_sec": 0, 00:14:40.273 "fast_io_fail_timeout_sec": 0, 00:14:40.273 "disable_auto_failback": false, 00:14:40.273 "generate_uuids": false, 00:14:40.273 "transport_tos": 0, 00:14:40.273 "nvme_error_stat": false, 00:14:40.273 "rdma_srq_size": 0, 00:14:40.273 "io_path_stat": false, 00:14:40.273 "allow_accel_sequence": false, 00:14:40.273 "rdma_max_cq_size": 0, 00:14:40.274 "rdma_cm_event_timeout_ms": 0, 00:14:40.274 "dhchap_digests": [ 00:14:40.274 "sha256", 00:14:40.274 "sha384", 00:14:40.274 "sha512" 00:14:40.274 ], 00:14:40.274 "dhchap_dhgroups": [ 00:14:40.274 "null", 00:14:40.274 "ffdhe2048", 00:14:40.274 "ffdhe3072", 00:14:40.274 "ffdhe4096", 00:14:40.274 "ffdhe6144", 00:14:40.274 "ffdhe8192" 00:14:40.274 ] 00:14:40.274 } 00:14:40.274 }, 00:14:40.274 { 00:14:40.274 "method": "bdev_nvme_attach_controller", 00:14:40.274 "params": { 00:14:40.274 "name": "nvme0", 00:14:40.274 "trtype": "TCP", 00:14:40.274 "adrfam": "IPv4", 00:14:40.274 "traddr": "10.0.0.2", 00:14:40.274 "trsvcid": "4420", 00:14:40.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:40.274 "prchk_reftag": false, 00:14:40.274 "prchk_guard": false, 00:14:40.274 "ctrlr_loss_timeout_sec": 0, 00:14:40.274 "reconnect_delay_sec": 0, 00:14:40.274 "fast_io_fail_timeout_sec": 0, 00:14:40.274 "psk": "key0", 00:14:40.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.274 "hdgst": false, 00:14:40.274 "ddgst": false 00:14:40.274 } 00:14:40.274 }, 00:14:40.274 { 00:14:40.274 "method": "bdev_nvme_set_hotplug", 00:14:40.274 "params": { 00:14:40.274 "period_us": 100000, 00:14:40.274 "enable": false 00:14:40.274 } 00:14:40.274 }, 00:14:40.274 { 00:14:40.274 "method": "bdev_enable_histogram", 00:14:40.274 "params": { 00:14:40.274 "name": "nvme0n1", 00:14:40.274 "enable": true 00:14:40.274 } 00:14:40.274 }, 00:14:40.274 { 00:14:40.274 "method": "bdev_wait_for_examine" 00:14:40.274 } 00:14:40.274 ] 00:14:40.274 }, 00:14:40.274 { 00:14:40.274 "subsystem": "nbd", 00:14:40.274 "config": [] 00:14:40.274 } 00:14:40.274 ] 00:14:40.274 }' 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74089 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74089 ']' 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74089 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74089 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:40.274 killing process with pid 74089 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74089' 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74089 00:14:40.274 Received shutdown signal, test time was about 1.000000 seconds 00:14:40.274 00:14:40.274 Latency(us) 00:14:40.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.274 =================================================================================================================== 00:14:40.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.274 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74089 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74057 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74057 ']' 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74057 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74057 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:40.536 killing process with pid 74057 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74057' 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74057 00:14:40.536 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74057 
00:14:40.800 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:40.800 16:29:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.800 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.800 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.800 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:40.800 "subsystems": [ 00:14:40.800 { 00:14:40.800 "subsystem": "keyring", 00:14:40.800 "config": [ 00:14:40.800 { 00:14:40.800 "method": "keyring_file_add_key", 00:14:40.800 "params": { 00:14:40.800 "name": "key0", 00:14:40.800 "path": "/tmp/tmp.Z7ZciVkTZZ" 00:14:40.800 } 00:14:40.800 } 00:14:40.800 ] 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "subsystem": "iobuf", 00:14:40.800 "config": [ 00:14:40.800 { 00:14:40.800 "method": "iobuf_set_options", 00:14:40.800 "params": { 00:14:40.800 "small_pool_count": 8192, 00:14:40.800 "large_pool_count": 1024, 00:14:40.800 "small_bufsize": 8192, 00:14:40.800 "large_bufsize": 135168 00:14:40.800 } 00:14:40.800 } 00:14:40.800 ] 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "subsystem": "sock", 00:14:40.800 "config": [ 00:14:40.800 { 00:14:40.800 "method": "sock_set_default_impl", 00:14:40.800 "params": { 00:14:40.800 "impl_name": "uring" 00:14:40.800 } 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "method": "sock_impl_set_options", 00:14:40.800 "params": { 00:14:40.800 "impl_name": "ssl", 00:14:40.800 "recv_buf_size": 4096, 00:14:40.800 "send_buf_size": 4096, 00:14:40.800 "enable_recv_pipe": true, 00:14:40.800 "enable_quickack": false, 00:14:40.800 "enable_placement_id": 0, 00:14:40.800 "enable_zerocopy_send_server": true, 00:14:40.800 "enable_zerocopy_send_client": false, 00:14:40.800 "zerocopy_threshold": 0, 00:14:40.800 "tls_version": 0, 00:14:40.800 "enable_ktls": false 00:14:40.800 } 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "method": "sock_impl_set_options", 00:14:40.800 "params": { 00:14:40.800 "impl_name": "posix", 00:14:40.800 "recv_buf_size": 2097152, 00:14:40.800 "send_buf_size": 2097152, 00:14:40.800 "enable_recv_pipe": true, 00:14:40.800 "enable_quickack": false, 00:14:40.800 "enable_placement_id": 0, 00:14:40.800 "enable_zerocopy_send_server": true, 00:14:40.800 "enable_zerocopy_send_client": false, 00:14:40.800 "zerocopy_threshold": 0, 00:14:40.800 "tls_version": 0, 00:14:40.800 "enable_ktls": false 00:14:40.800 } 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "method": "sock_impl_set_options", 00:14:40.800 "params": { 00:14:40.800 "impl_name": "uring", 00:14:40.800 "recv_buf_size": 2097152, 00:14:40.800 "send_buf_size": 2097152, 00:14:40.800 "enable_recv_pipe": true, 00:14:40.800 "enable_quickack": false, 00:14:40.800 "enable_placement_id": 0, 00:14:40.800 "enable_zerocopy_send_server": false, 00:14:40.800 "enable_zerocopy_send_client": false, 00:14:40.800 "zerocopy_threshold": 0, 00:14:40.800 "tls_version": 0, 00:14:40.800 "enable_ktls": false 00:14:40.800 } 00:14:40.800 } 00:14:40.800 ] 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "subsystem": "vmd", 00:14:40.800 "config": [] 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "subsystem": "accel", 00:14:40.800 "config": [ 00:14:40.800 { 00:14:40.800 "method": "accel_set_options", 00:14:40.800 "params": { 00:14:40.800 "small_cache_size": 128, 00:14:40.800 "large_cache_size": 16, 00:14:40.800 "task_count": 2048, 00:14:40.800 "sequence_count": 2048, 00:14:40.800 "buf_count": 2048 00:14:40.800 } 00:14:40.800 } 00:14:40.800 ] 00:14:40.800 }, 00:14:40.800 { 
00:14:40.800 "subsystem": "bdev", 00:14:40.800 "config": [ 00:14:40.800 { 00:14:40.800 "method": "bdev_set_options", 00:14:40.800 "params": { 00:14:40.800 "bdev_io_pool_size": 65535, 00:14:40.800 "bdev_io_cache_size": 256, 00:14:40.800 "bdev_auto_examine": true, 00:14:40.800 "iobuf_small_cache_size": 128, 00:14:40.800 "iobuf_large_cache_size": 16 00:14:40.800 } 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "method": "bdev_raid_set_options", 00:14:40.800 "params": { 00:14:40.800 "process_window_size_kb": 1024 00:14:40.800 } 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "method": "bdev_iscsi_set_options", 00:14:40.800 "params": { 00:14:40.800 "timeout_sec": 30 00:14:40.800 } 00:14:40.800 }, 00:14:40.800 { 00:14:40.800 "method": "bdev_nvme_set_options", 00:14:40.800 "params": { 00:14:40.800 "action_on_timeout": "none", 00:14:40.800 "timeout_us": 0, 00:14:40.800 "timeout_admin_us": 0, 00:14:40.800 "keep_alive_timeout_ms": 10000, 00:14:40.800 "arbitration_burst": 0, 00:14:40.800 "low_priority_weight": 0, 00:14:40.800 "medium_priority_weight": 0, 00:14:40.800 "high_priority_weight": 0, 00:14:40.800 "nvme_adminq_poll_period_us": 10000, 00:14:40.800 "nvme_ioq_poll_period_us": 0, 00:14:40.800 "io_queue_requests": 0, 00:14:40.800 "delay_cmd_submit": true, 00:14:40.800 "transport_retry_count": 4, 00:14:40.800 "bdev_retry_count": 3, 00:14:40.800 "transport_ack_timeout": 0, 00:14:40.800 "ctrlr_loss_timeout_sec": 0, 00:14:40.800 "reconnect_delay_sec": 0, 00:14:40.800 "fast_io_fail_timeout_sec": 0, 00:14:40.800 "disable_auto_failback": false, 00:14:40.800 "generate_uuids": false, 00:14:40.800 "transport_tos": 0, 00:14:40.800 "nvme_error_stat": false, 00:14:40.800 "rdma_srq_size": 0, 00:14:40.800 "io_path_stat": false, 00:14:40.800 "allow_accel_sequence": false, 00:14:40.800 "rdma_max_cq_size": 0, 00:14:40.800 "rdma_cm_event_timeout_ms": 0, 00:14:40.800 "dhchap_digests": [ 00:14:40.800 "sha256", 00:14:40.800 "sha384", 00:14:40.800 "sha512" 00:14:40.800 ], 00:14:40.800 "dhchap_dhgroups": [ 00:14:40.800 "null", 00:14:40.800 "ffdhe2048", 00:14:40.800 "ffdhe3072", 00:14:40.800 "ffdhe4096", 00:14:40.800 "ffdhe6144", 00:14:40.801 "ffdhe8192" 00:14:40.801 ] 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": "bdev_nvme_set_hotplug", 00:14:40.801 "params": { 00:14:40.801 "period_us": 100000, 00:14:40.801 "enable": false 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": "bdev_malloc_create", 00:14:40.801 "params": { 00:14:40.801 "name": "malloc0", 00:14:40.801 "num_blocks": 8192, 00:14:40.801 "block_size": 4096, 00:14:40.801 "physical_block_size": 4096, 00:14:40.801 "uuid": "9f125491-b5e0-494b-adfe-2cd3d45d8eb4", 00:14:40.801 "optimal_io_boundary": 0 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": "bdev_wait_for_examine" 00:14:40.801 } 00:14:40.801 ] 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "subsystem": "nbd", 00:14:40.801 "config": [] 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "subsystem": "scheduler", 00:14:40.801 "config": [ 00:14:40.801 { 00:14:40.801 "method": "framework_set_scheduler", 00:14:40.801 "params": { 00:14:40.801 "name": "static" 00:14:40.801 } 00:14:40.801 } 00:14:40.801 ] 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "subsystem": "nvmf", 00:14:40.801 "config": [ 00:14:40.801 { 00:14:40.801 "method": "nvmf_set_config", 00:14:40.801 "params": { 00:14:40.801 "discovery_filter": "match_any", 00:14:40.801 "admin_cmd_passthru": { 00:14:40.801 "identify_ctrlr": false 00:14:40.801 } 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": 
"nvmf_set_max_subsystems", 00:14:40.801 "params": { 00:14:40.801 "max_subsystems": 1024 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": "nvmf_set_crdt", 00:14:40.801 "params": { 00:14:40.801 "crdt1": 0, 00:14:40.801 "crdt2": 0, 00:14:40.801 "crdt3": 0 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": "nvmf_create_transport", 00:14:40.801 "params": { 00:14:40.801 "trtype": "TCP", 00:14:40.801 "max_queue_depth": 128, 00:14:40.801 "max_io_qpairs_per_ctrlr": 127, 00:14:40.801 "in_capsule_data_size": 4096, 00:14:40.801 "max_io_size": 131072, 00:14:40.801 "io_unit_size": 131072, 00:14:40.801 "max_aq_depth": 128, 00:14:40.801 "num_shared_buffers": 511, 00:14:40.801 "buf_cache_size": 4294967295, 00:14:40.801 "dif_insert_or_strip": false, 00:14:40.801 "zcopy": false, 00:14:40.801 "c2h_success": false, 00:14:40.801 "sock_priority": 0, 00:14:40.801 "abort_timeout_sec": 1, 00:14:40.801 "ack_timeout": 0, 00:14:40.801 "data_wr_pool_size": 0 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": "nvmf_create_subsystem", 00:14:40.801 "params": { 00:14:40.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.801 "allow_any_host": false, 00:14:40.801 "serial_number": "00000000000000000000", 00:14:40.801 "model_number": "SPDK bdev Controller", 00:14:40.801 "max_namespaces": 32, 00:14:40.801 "min_cntlid": 1, 00:14:40.801 "max_cntlid": 65519, 00:14:40.801 "ana_reporting": false 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": "nvmf_subsystem_add_host", 00:14:40.801 "params": { 00:14:40.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.801 "host": "nqn.2016-06.io.spdk:host1", 00:14:40.801 "psk": "key0" 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": "nvmf_subsystem_add_ns", 00:14:40.801 "params": { 00:14:40.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.801 "namespace": { 00:14:40.801 "nsid": 1, 00:14:40.801 "bdev_name": "malloc0", 00:14:40.801 "nguid": "9F125491B5E0494BADFE2CD3D45D8EB4", 00:14:40.801 "uuid": "9f125491-b5e0-494b-adfe-2cd3d45d8eb4", 00:14:40.801 "no_auto_visible": false 00:14:40.801 } 00:14:40.801 } 00:14:40.801 }, 00:14:40.801 { 00:14:40.801 "method": "nvmf_subsystem_add_listener", 00:14:40.801 "params": { 00:14:40.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.801 "listen_address": { 00:14:40.801 "trtype": "TCP", 00:14:40.801 "adrfam": "IPv4", 00:14:40.801 "traddr": "10.0.0.2", 00:14:40.801 "trsvcid": "4420" 00:14:40.801 }, 00:14:40.801 "secure_channel": true 00:14:40.801 } 00:14:40.801 } 00:14:40.801 ] 00:14:40.801 } 00:14:40.801 ] 00:14:40.801 }' 00:14:40.801 16:29:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74144 00:14:40.801 16:29:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:40.801 16:29:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74144 00:14:40.801 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74144 ']' 00:14:40.801 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.801 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.801 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:40.801 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.801 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.062 [2024-07-15 16:29:26.362967] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:41.062 [2024-07-15 16:29:26.363059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.062 [2024-07-15 16:29:26.503411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.332 [2024-07-15 16:29:26.611372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.332 [2024-07-15 16:29:26.611432] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.332 [2024-07-15 16:29:26.611443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.332 [2024-07-15 16:29:26.611451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.332 [2024-07-15 16:29:26.611459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.332 [2024-07-15 16:29:26.611549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.332 [2024-07-15 16:29:26.780120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:41.332 [2024-07-15 16:29:26.860854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.591 [2024-07-15 16:29:26.892782] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.591 [2024-07-15 16:29:26.893094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74181 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74181 /var/tmp/bdevperf.sock 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74181 ']' 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:41.861 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:41.861 "subsystems": [ 00:14:41.861 { 00:14:41.861 "subsystem": "keyring", 00:14:41.861 "config": [ 00:14:41.861 { 00:14:41.861 "method": "keyring_file_add_key", 00:14:41.861 "params": { 00:14:41.861 "name": "key0", 00:14:41.861 "path": "/tmp/tmp.Z7ZciVkTZZ" 00:14:41.861 } 00:14:41.861 } 00:14:41.861 ] 00:14:41.861 }, 00:14:41.861 { 00:14:41.861 "subsystem": "iobuf", 00:14:41.861 "config": [ 00:14:41.861 { 00:14:41.861 "method": "iobuf_set_options", 00:14:41.861 "params": { 00:14:41.861 "small_pool_count": 8192, 00:14:41.861 "large_pool_count": 1024, 00:14:41.861 "small_bufsize": 8192, 00:14:41.861 "large_bufsize": 135168 00:14:41.861 } 00:14:41.861 } 00:14:41.861 ] 00:14:41.861 }, 00:14:41.861 { 00:14:41.861 "subsystem": "sock", 00:14:41.861 "config": [ 00:14:41.861 { 00:14:41.861 "method": "sock_set_default_impl", 00:14:41.861 "params": { 00:14:41.861 "impl_name": "uring" 00:14:41.861 } 00:14:41.861 }, 00:14:41.861 { 00:14:41.861 "method": "sock_impl_set_options", 00:14:41.861 "params": { 00:14:41.861 "impl_name": "ssl", 00:14:41.861 "recv_buf_size": 4096, 00:14:41.861 "send_buf_size": 4096, 00:14:41.861 "enable_recv_pipe": true, 00:14:41.861 "enable_quickack": false, 00:14:41.861 "enable_placement_id": 0, 00:14:41.861 "enable_zerocopy_send_server": true, 00:14:41.861 "enable_zerocopy_send_client": false, 00:14:41.861 "zerocopy_threshold": 0, 00:14:41.861 "tls_version": 0, 00:14:41.861 "enable_ktls": false 00:14:41.861 } 00:14:41.861 }, 00:14:41.861 { 00:14:41.861 "method": "sock_impl_set_options", 00:14:41.861 "params": { 00:14:41.861 "impl_name": "posix", 00:14:41.861 "recv_buf_size": 2097152, 00:14:41.861 "send_buf_size": 2097152, 00:14:41.861 "enable_recv_pipe": true, 00:14:41.861 "enable_quickack": false, 00:14:41.861 "enable_placement_id": 0, 00:14:41.861 "enable_zerocopy_send_server": true, 00:14:41.861 "enable_zerocopy_send_client": false, 00:14:41.861 "zerocopy_threshold": 0, 00:14:41.861 "tls_version": 0, 00:14:41.861 "enable_ktls": false 00:14:41.861 } 00:14:41.861 }, 00:14:41.861 { 00:14:41.861 "method": "sock_impl_set_options", 00:14:41.861 "params": { 00:14:41.861 "impl_name": "uring", 00:14:41.861 "recv_buf_size": 2097152, 00:14:41.861 "send_buf_size": 2097152, 00:14:41.861 "enable_recv_pipe": true, 00:14:41.861 "enable_quickack": false, 00:14:41.861 "enable_placement_id": 0, 00:14:41.861 "enable_zerocopy_send_server": false, 00:14:41.861 "enable_zerocopy_send_client": false, 00:14:41.861 "zerocopy_threshold": 0, 00:14:41.861 "tls_version": 0, 00:14:41.861 "enable_ktls": false 00:14:41.861 } 00:14:41.861 } 00:14:41.861 ] 00:14:41.861 }, 00:14:41.861 { 00:14:41.861 "subsystem": "vmd", 00:14:41.861 "config": [] 00:14:41.861 }, 00:14:41.861 { 00:14:41.861 "subsystem": "accel", 00:14:41.861 "config": [ 00:14:41.861 { 00:14:41.861 "method": "accel_set_options", 00:14:41.861 "params": { 00:14:41.861 "small_cache_size": 128, 00:14:41.861 "large_cache_size": 16, 00:14:41.861 "task_count": 2048, 00:14:41.861 "sequence_count": 2048, 00:14:41.861 "buf_count": 2048 00:14:41.861 } 00:14:41.861 } 00:14:41.861 ] 00:14:41.861 }, 00:14:41.861 { 
00:14:41.861 "subsystem": "bdev", 00:14:41.861 "config": [ 00:14:41.861 { 00:14:41.861 "method": "bdev_set_options", 00:14:41.861 "params": { 00:14:41.861 "bdev_io_pool_size": 65535, 00:14:41.862 "bdev_io_cache_size": 256, 00:14:41.862 "bdev_auto_examine": true, 00:14:41.862 "iobuf_small_cache_size": 128, 00:14:41.862 "iobuf_large_cache_size": 16 00:14:41.862 } 00:14:41.862 }, 00:14:41.862 { 00:14:41.862 "method": "bdev_raid_set_options", 00:14:41.862 "params": { 00:14:41.862 "process_window_size_kb": 1024 00:14:41.862 } 00:14:41.862 }, 00:14:41.862 { 00:14:41.862 "method": "bdev_iscsi_set_options", 00:14:41.862 "params": { 00:14:41.862 "timeout_sec": 30 00:14:41.862 } 00:14:41.862 }, 00:14:41.862 { 00:14:41.862 "method": "bdev_nvme_set_options", 00:14:41.862 "params": { 00:14:41.862 "action_on_timeout": "none", 00:14:41.862 "timeout_us": 0, 00:14:41.862 "timeout_admin_us": 0, 00:14:41.862 "keep_alive_timeout_ms": 10000, 00:14:41.862 "arbitration_burst": 0, 00:14:41.862 "low_priority_weight": 0, 00:14:41.862 "medium_priority_weight": 0, 00:14:41.862 "high_priority_weight": 0, 00:14:41.862 "nvme_adminq_poll_period_us": 10000, 00:14:41.862 "nvme_ioq_poll_period_us": 0, 00:14:41.862 "io_queue_requests": 512, 00:14:41.862 "delay_cmd_submit": true, 00:14:41.862 "transport_retry_count": 4, 00:14:41.862 "bdev_retry_count": 3, 00:14:41.862 "transport_ack_timeout": 0, 00:14:41.862 "ctrlr_loss_timeout_sec": 0, 00:14:41.862 "reconnect_delay_sec": 0, 00:14:41.862 "fast_io_fail_timeout_sec": 0, 00:14:41.862 "disable_auto_failback": false, 00:14:41.862 "generate_uuids": false, 00:14:41.862 "transport_tos": 0, 00:14:41.862 "nvme_error_stat": false, 00:14:41.862 "rdma_srq_size": 0, 00:14:41.862 "io_path_stat": false, 00:14:41.862 "allow_accel_sequence": false, 00:14:41.862 "rdma_max_cq_size": 0, 00:14:41.862 "rdma_cm_event_timeout_ms": 0, 00:14:41.862 "dhchap_digests": [ 00:14:41.862 "sha256", 00:14:41.862 "sha384", 00:14:41.862 "sha512" 00:14:41.862 ], 00:14:41.862 "dhchap_dhgroups": [ 00:14:41.862 "null", 00:14:41.862 "ffdhe2048", 00:14:41.862 "ffdhe3072", 00:14:41.862 "ffdhe4096", 00:14:41.862 "ffdhe6144", 00:14:41.862 "ffdhe8192" 00:14:41.862 ] 00:14:41.862 } 00:14:41.862 }, 00:14:41.862 { 00:14:41.862 "method": "bdev_nvme_attach_controller", 00:14:41.862 "params": { 00:14:41.862 "name": "nvme0", 00:14:41.862 "trtype": "TCP", 00:14:41.862 "adrfam": "IPv4", 00:14:41.862 "traddr": "10.0.0.2", 00:14:41.862 "trsvcid": "4420", 00:14:41.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.862 "prchk_reftag": false, 00:14:41.862 "prchk_guard": false, 00:14:41.862 "ctrlr_loss_timeout_sec": 0, 00:14:41.862 "reconnect_delay_sec": 0, 00:14:41.862 "fast_io_fail_timeout_sec": 0, 00:14:41.862 "psk": "key0", 00:14:41.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.862 "hdgst": false, 00:14:41.862 "ddgst": false 00:14:41.862 } 00:14:41.862 }, 00:14:41.862 { 00:14:41.862 "method": "bdev_nvme_set_hotplug", 00:14:41.862 "params": { 00:14:41.862 "period_us": 100000, 00:14:41.862 "enable": false 00:14:41.862 } 00:14:41.862 }, 00:14:41.862 { 00:14:41.862 "method": "bdev_enable_histogram", 00:14:41.862 "params": { 00:14:41.862 "name": "nvme0n1", 00:14:41.862 "enable": true 00:14:41.862 } 00:14:41.862 }, 00:14:41.862 { 00:14:41.862 "method": "bdev_wait_for_examine" 00:14:41.862 } 00:14:41.862 ] 00:14:41.862 }, 00:14:41.862 { 00:14:41.862 "subsystem": "nbd", 00:14:41.862 "config": [] 00:14:41.862 } 00:14:41.862 ] 00:14:41.862 }' 00:14:42.120 [2024-07-15 16:29:27.414772] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 
24.03.0 initialization... 00:14:42.120 [2024-07-15 16:29:27.414949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74181 ] 00:14:42.120 [2024-07-15 16:29:27.553640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.389 [2024-07-15 16:29:27.675828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.389 [2024-07-15 16:29:27.815377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:42.389 [2024-07-15 16:29:27.861719] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.957 16:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.957 16:29:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:42.957 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:42.957 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:43.215 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.215 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:43.473 Running I/O for 1 seconds... 00:14:44.485 00:14:44.485 Latency(us) 00:14:44.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.485 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:44.485 Verification LBA range: start 0x0 length 0x2000 00:14:44.485 nvme0n1 : 1.04 2954.56 11.54 0.00 0.00 42694.96 12451.84 32648.84 00:14:44.485 =================================================================================================================== 00:14:44.485 Total : 2954.56 11.54 0.00 0.00 42694.96 12451.84 32648.84 00:14:44.485 0 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:44.485 16:29:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:44.485 nvmf_trace.0 00:14:44.485 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:44.485 16:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74181 00:14:44.485 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74181 ']' 00:14:44.485 16:29:30 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74181 00:14:44.485 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:44.485 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.485 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74181 00:14:44.743 killing process with pid 74181 00:14:44.743 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.743 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.743 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74181' 00:14:44.743 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74181 00:14:44.743 Received shutdown signal, test time was about 1.000000 seconds 00:14:44.743 00:14:44.743 Latency(us) 00:14:44.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.743 =================================================================================================================== 00:14:44.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.743 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74181 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.001 rmmod nvme_tcp 00:14:45.001 rmmod nvme_fabrics 00:14:45.001 rmmod nvme_keyring 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74144 ']' 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74144 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74144 ']' 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74144 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74144 00:14:45.001 killing process with pid 74144 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74144' 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74144 00:14:45.001 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74144 00:14:45.259 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.259 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
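
The JSON dump at the top of this block is the complete bdev subsystem configuration handed to bdevperf for the nvmf_tls case; the piece that matters for TLS is the bdev_nvme_attach_controller entry, whose "psk": "key0" field is what asks the initiator to negotiate TLS when it connects to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. A configuration in exactly this shape can be captured from a live SPDK application and replayed later. The sketch below is illustrative only: save_config and the --json start-up flag come from standard SPDK tooling and are not exercised anywhere in this log, so treat their use here as an assumption.

  # Capture the running bdevperf configuration in the JSON form shown above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf_config.json
  # Replay it on a fresh bdevperf instance with the same workload parameters as this run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json bdevperf_config.json -q 128 -o 4096 -w verify -t 1
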
00:14:45.259 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.259 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.259 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.259 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.259 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.259 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.518 16:29:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:45.518 16:29:30 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.BGQrpV4wXT /tmp/tmp.msJsERpKGe /tmp/tmp.Z7ZciVkTZZ 00:14:45.518 00:14:45.518 real 1m28.178s 00:14:45.518 user 2m20.821s 00:14:45.518 sys 0m27.743s 00:14:45.518 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:45.518 16:29:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.518 ************************************ 00:14:45.518 END TEST nvmf_tls 00:14:45.518 ************************************ 00:14:45.518 16:29:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:45.518 16:29:30 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:45.518 16:29:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:45.518 16:29:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:45.518 16:29:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:45.518 ************************************ 00:14:45.518 START TEST nvmf_fips 00:14:45.518 ************************************ 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:45.518 * Looking for test storage... 
00:14:45.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.518 16:29:30 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:45.519 16:29:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:45.519 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:45.778 Error setting digest 00:14:45.778 0012BE0FBD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:45.778 0012BE0FBD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:45.778 Cannot find device "nvmf_tgt_br" 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.778 Cannot find device "nvmf_tgt_br2" 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:45.778 Cannot find device "nvmf_tgt_br" 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:45.778 Cannot find device "nvmf_tgt_br2" 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:45.778 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.036 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:46.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:14:46.037 00:14:46.037 --- 10.0.0.2 ping statistics --- 00:14:46.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.037 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:46.037 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:46.037 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:14:46.037 00:14:46.037 --- 10.0.0.3 ping statistics --- 00:14:46.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.037 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:14:46.037 00:14:46.037 --- 10.0.0.1 ping statistics --- 00:14:46.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.037 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74450 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74450 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74450 ']' 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.037 16:29:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:46.295 [2024-07-15 16:29:31.646794] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:14:46.295 [2024-07-15 16:29:31.646902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.295 [2024-07-15 16:29:31.785580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.558 [2024-07-15 16:29:31.911431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.558 [2024-07-15 16:29:31.911484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.558 [2024-07-15 16:29:31.911498] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.558 [2024-07-15 16:29:31.911509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.558 [2024-07-15 16:29:31.911518] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.558 [2024-07-15 16:29:31.911549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.558 [2024-07-15 16:29:31.972320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:47.126 16:29:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.126 16:29:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:47.126 16:29:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.126 16:29:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:47.126 16:29:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:47.397 16:29:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.397 16:29:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:47.397 16:29:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:47.397 16:29:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:47.397 16:29:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:47.397 16:29:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:47.397 16:29:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:47.397 16:29:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:47.397 16:29:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:47.655 [2024-07-15 16:29:32.961844] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.655 [2024-07-15 16:29:32.977815] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.655 [2024-07-15 16:29:32.978070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.655 [2024-07-15 16:29:33.010534] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:47.655 malloc0 00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
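
This block shows the target side of the FIPS/TLS test being prepared: a PSK in NVMe TLS interchange format is written to key.txt and locked down to mode 0600, the TCP transport is initialized, a TLS-enabled listener comes up on 10.0.0.2:4420 (flagged as experimental), and the PSK-path variant of adding a host triggers its deprecation warning. The individual rpc.py calls made by fips.sh are not echoed in this log, so the sequence below is only a sketch of a typical target-side setup; the RPC names exist in standard SPDK tooling, but the exact flags fips.sh uses are an assumption.

  # Key material exactly as printed by the test, with restrictive permissions
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt
  # Assumed target-side RPC sequence (not echoed by this run)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp
  $rpc bdev_malloc_create -b malloc0 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt
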
00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74485 00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74485 /var/tmp/bdevperf.sock 00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74485 ']' 00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.655 16:29:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:47.655 [2024-07-15 16:29:33.114340] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:14:47.655 [2024-07-15 16:29:33.114467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74485 ] 00:14:47.912 [2024-07-15 16:29:33.252538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.912 [2024-07-15 16:29:33.386311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.912 [2024-07-15 16:29:33.448218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.845 16:29:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.845 16:29:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:48.845 16:29:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:48.845 [2024-07-15 16:29:34.327280] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:48.845 [2024-07-15 16:29:34.327482] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:48.845 TLSTESTn1 00:14:49.103 16:29:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.103 Running I/O for 10 seconds... 
00:14:59.084 00:14:59.084 Latency(us) 00:14:59.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.084 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:59.084 Verification LBA range: start 0x0 length 0x2000 00:14:59.084 TLSTESTn1 : 10.02 3717.04 14.52 0.00 0.00 34372.21 6076.97 30265.72 00:14:59.084 =================================================================================================================== 00:14:59.084 Total : 3717.04 14.52 0.00 0.00 34372.21 6076.97 30265.72 00:14:59.084 0 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:59.084 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:59.084 nvmf_trace.0 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74485 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74485 ']' 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74485 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74485 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:59.343 killing process with pid 74485 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74485' 00:14:59.343 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.343 00:14:59.343 Latency(us) 00:14:59.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.343 =================================================================================================================== 00:14:59.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74485 00:14:59.343 [2024-07-15 16:29:44.703469] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:59.343 16:29:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74485 00:14:59.602 16:29:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:59.602 16:29:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
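
The TLSTESTn1 summary above follows directly from the bdevperf parameters: with 4096-byte I/O at queue depth 128, throughput in MiB/s is simply IOPS x 4096 / 2^20, so 3717.04 IOPS works out to about 14.52 MiB/s, matching the table (the earlier 1-second nvmf_tls run shows the same relationship: 2954.56 x 4096 / 2^20 is about 11.54 MiB/s). The process_shm cleanup that runs next archives whatever trace buffers the target left in shared memory; written as a standalone snippet using the same find and tar invocations the test prints, it looks like this:

  # Archive SPDK trace buffers from /dev/shm so they survive the test teardown
  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
  for n in $shm_files; do
      tar -C /dev/shm/ -cvzf "/home/vagrant/spdk_repo/spdk/../output/${n}_shm.tar.gz" "$n"
  done
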
00:14:59.602 16:29:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:59.602 16:29:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.602 16:29:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:59.602 16:29:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.602 16:29:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.602 rmmod nvme_tcp 00:14:59.602 rmmod nvme_fabrics 00:14:59.602 rmmod nvme_keyring 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74450 ']' 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74450 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74450 ']' 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74450 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74450 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:59.602 killing process with pid 74450 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74450' 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74450 00:14:59.602 [2024-07-15 16:29:45.060283] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:59.602 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74450 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:59.861 00:14:59.861 real 0m14.462s 00:14:59.861 user 0m19.372s 00:14:59.861 sys 0m6.084s 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.861 16:29:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:59.861 ************************************ 00:14:59.861 END TEST nvmf_fips 00:14:59.861 ************************************ 00:14:59.861 16:29:45 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:59.861 16:29:45 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:14:59.861 16:29:45 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:14:59.861 16:29:45 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:59.861 16:29:45 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.861 16:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.120 16:29:45 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:00.120 16:29:45 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:00.120 16:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.120 16:29:45 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:00.120 16:29:45 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:00.120 16:29:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.120 16:29:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.120 16:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.120 ************************************ 00:15:00.120 START TEST nvmf_identify 00:15:00.120 ************************************ 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:00.120 * Looking for test storage... 00:15:00.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.120 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:00.120 16:29:45 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.121 16:29:45 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:00.121 Cannot find device "nvmf_tgt_br" 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:00.121 Cannot find device "nvmf_tgt_br2" 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:00.121 16:29:45 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:00.121 Cannot find device "nvmf_tgt_br" 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:00.121 Cannot find device "nvmf_tgt_br2" 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:00.121 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:00.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:00.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:00.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:00.381 00:15:00.381 --- 10.0.0.2 ping statistics --- 00:15:00.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.381 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:00.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:00.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:00.381 00:15:00.381 --- 10.0.0.3 ping statistics --- 00:15:00.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.381 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:00.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:00.381 00:15:00.381 --- 10.0.0.1 ping statistics --- 00:15:00.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.381 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74830 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74830 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74830 ']' 00:15:00.381 16:29:45 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.381 16:29:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.641 [2024-07-15 16:29:45.935296] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:00.641 [2024-07-15 16:29:45.935422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.641 [2024-07-15 16:29:46.077743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.900 [2024-07-15 16:29:46.214652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.900 [2024-07-15 16:29:46.214731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.900 [2024-07-15 16:29:46.214745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.900 [2024-07-15 16:29:46.214755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.900 [2024-07-15 16:29:46.214764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
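Editor's note: stripped of the xtrace prefixes, the nvmf_veth_init sequence traced above reduces to the following condensed sketch. Every namespace, interface, address, and rule name is copied from the trace itself; only the grouping into loops is editorial. Run as root.

  # condensed recreation of the topology nvmf_veth_init builds above
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator, two for the target; the target ends move into the namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the three host-side peers together
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set lo up; ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # admit NVMe/TCP traffic on port 4420 and let bridged frames through
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity checks, matching the three pings in the trace
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1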
00:15:00.900 [2024-07-15 16:29:46.215167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.900 [2024-07-15 16:29:46.215511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.900 [2024-07-15 16:29:46.215603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.900 [2024-07-15 16:29:46.215606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.900 [2024-07-15 16:29:46.273845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.840 [2024-07-15 16:29:47.026460] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.840 Malloc0 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.840 [2024-07-15 16:29:47.146270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.840 [ 00:15:01.840 { 00:15:01.840 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.840 "subtype": "Discovery", 00:15:01.840 "listen_addresses": [ 00:15:01.840 { 00:15:01.840 "trtype": "TCP", 00:15:01.840 "adrfam": "IPv4", 00:15:01.840 "traddr": "10.0.0.2", 00:15:01.840 "trsvcid": "4420" 00:15:01.840 } 00:15:01.840 ], 00:15:01.840 "allow_any_host": true, 00:15:01.840 "hosts": [] 00:15:01.840 }, 00:15:01.840 { 00:15:01.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.840 "subtype": "NVMe", 00:15:01.840 "listen_addresses": [ 00:15:01.840 { 00:15:01.840 "trtype": "TCP", 00:15:01.840 "adrfam": "IPv4", 00:15:01.840 "traddr": "10.0.0.2", 00:15:01.840 "trsvcid": "4420" 00:15:01.840 } 00:15:01.840 ], 00:15:01.840 "allow_any_host": true, 00:15:01.840 "hosts": [], 00:15:01.840 "serial_number": "SPDK00000000000001", 00:15:01.840 "model_number": "SPDK bdev Controller", 00:15:01.840 "max_namespaces": 32, 00:15:01.840 "min_cntlid": 1, 00:15:01.840 "max_cntlid": 65519, 00:15:01.840 "namespaces": [ 00:15:01.840 { 00:15:01.840 "nsid": 1, 00:15:01.840 "bdev_name": "Malloc0", 00:15:01.840 "name": "Malloc0", 00:15:01.840 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:01.840 "eui64": "ABCDEF0123456789", 00:15:01.840 "uuid": "e3755bd4-e6f3-459f-a374-a5f6f4f4ddf3" 00:15:01.840 } 00:15:01.840 ] 00:15:01.840 } 00:15:01.840 ] 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.840 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:01.840 [2024-07-15 16:29:47.198830] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
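Editor's note: the target bring-up and configuration traced above can be summarized as the sketch below. The paths, NQNs, and arguments are taken from the trace; treating rpc_cmd as a thin wrapper over scripts/rpc.py talking to the default /var/tmp/spdk.sock socket is an assumption about the harness, not something the log states.

  # start the target inside the namespace, as host/identify.sh@18 does above
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # configure it over the RPC socket (the harness waits for /var/tmp/spdk.sock first)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems   # should report the two subsystems dumped above
  # query the discovery subsystem, exactly as host/identify.sh@39 does
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

Swapping the subnqn for nqn.2016-06.io.spdk:cnode1 would identify the data subsystem (and its Malloc0 namespace) instead of the discovery controller.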
00:15:01.840 [2024-07-15 16:29:47.198915] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74865 ] 00:15:01.840 [2024-07-15 16:29:47.342822] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:01.840 [2024-07-15 16:29:47.342922] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:01.840 [2024-07-15 16:29:47.342930] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:01.840 [2024-07-15 16:29:47.342947] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:01.840 [2024-07-15 16:29:47.342972] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:01.840 [2024-07-15 16:29:47.343187] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:01.840 [2024-07-15 16:29:47.343268] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11422c0 0 00:15:01.840 [2024-07-15 16:29:47.348954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:01.840 [2024-07-15 16:29:47.348978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:01.841 [2024-07-15 16:29:47.348984] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:01.841 [2024-07-15 16:29:47.348988] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:01.841 [2024-07-15 16:29:47.349078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.349087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.349092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.841 [2024-07-15 16:29:47.349109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:01.841 [2024-07-15 16:29:47.349144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.841 [2024-07-15 16:29:47.356904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.841 [2024-07-15 16:29:47.356924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.841 [2024-07-15 16:29:47.356930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.356935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.841 [2024-07-15 16:29:47.356952] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:01.841 [2024-07-15 16:29:47.356960] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:01.841 [2024-07-15 16:29:47.356967] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:01.841 [2024-07-15 16:29:47.356986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.356991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.841 
[2024-07-15 16:29:47.356995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.841 [2024-07-15 16:29:47.357012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.841 [2024-07-15 16:29:47.357058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.841 [2024-07-15 16:29:47.357139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.841 [2024-07-15 16:29:47.357147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.841 [2024-07-15 16:29:47.357151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.841 [2024-07-15 16:29:47.357162] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:01.841 [2024-07-15 16:29:47.357170] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:01.841 [2024-07-15 16:29:47.357178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.841 [2024-07-15 16:29:47.357195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.841 [2024-07-15 16:29:47.357215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.841 [2024-07-15 16:29:47.357266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.841 [2024-07-15 16:29:47.357273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.841 [2024-07-15 16:29:47.357276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.841 [2024-07-15 16:29:47.357287] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:01.841 [2024-07-15 16:29:47.357297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:01.841 [2024-07-15 16:29:47.357304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.841 [2024-07-15 16:29:47.357321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.841 [2024-07-15 16:29:47.357354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.841 [2024-07-15 16:29:47.357400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.841 [2024-07-15 16:29:47.357407] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.841 [2024-07-15 16:29:47.357411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.841 [2024-07-15 16:29:47.357421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:01.841 [2024-07-15 16:29:47.357431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.841 [2024-07-15 16:29:47.357446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.841 [2024-07-15 16:29:47.357463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.841 [2024-07-15 16:29:47.357516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.841 [2024-07-15 16:29:47.357522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.841 [2024-07-15 16:29:47.357526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.841 [2024-07-15 16:29:47.357535] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:01.841 [2024-07-15 16:29:47.357540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:01.841 [2024-07-15 16:29:47.357549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:01.841 [2024-07-15 16:29:47.357655] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:01.841 [2024-07-15 16:29:47.357661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:01.841 [2024-07-15 16:29:47.357673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.841 [2024-07-15 16:29:47.357703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.841 [2024-07-15 16:29:47.357722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.841 [2024-07-15 16:29:47.357770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.841 [2024-07-15 16:29:47.357776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.841 [2024-07-15 16:29:47.357780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.841 
[2024-07-15 16:29:47.357784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.841 [2024-07-15 16:29:47.357789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:01.841 [2024-07-15 16:29:47.357799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.841 [2024-07-15 16:29:47.357813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.841 [2024-07-15 16:29:47.357830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.841 [2024-07-15 16:29:47.357875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.841 [2024-07-15 16:29:47.357882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.841 [2024-07-15 16:29:47.357900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.841 [2024-07-15 16:29:47.357909] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:01.841 [2024-07-15 16:29:47.357917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:01.841 [2024-07-15 16:29:47.357925] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:01.841 [2024-07-15 16:29:47.357936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:01.841 [2024-07-15 16:29:47.357947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.357951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.841 [2024-07-15 16:29:47.357959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.841 [2024-07-15 16:29:47.357979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.841 [2024-07-15 16:29:47.358085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.841 [2024-07-15 16:29:47.358093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.841 [2024-07-15 16:29:47.358097] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.358101] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11422c0): datao=0, datal=4096, cccid=0 00:15:01.841 [2024-07-15 16:29:47.358106] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1183940) on tqpair(0x11422c0): expected_datao=0, payload_size=4096 00:15:01.841 [2024-07-15 16:29:47.358111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.841 
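Editor's note: the debug lines in this stretch are the initiator's view of the admin-queue handshake (fabric connect, CAP/VS/CC/CSTS property accesses, then IDENTIFY). The matching target-side events could be snapshotted with the command the application itself suggested in the app_setup_trace notice earlier in the log; the binary location is an assumption.

  # per the app_setup_trace hint above (the target was started with -e 0xFFFF)
  ./build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory trace for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/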
[2024-07-15 16:29:47.358119] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.358124] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.358133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.841 [2024-07-15 16:29:47.358138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.841 [2024-07-15 16:29:47.358142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.841 [2024-07-15 16:29:47.358146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.842 [2024-07-15 16:29:47.358155] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:01.842 [2024-07-15 16:29:47.358160] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:01.842 [2024-07-15 16:29:47.358165] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:01.842 [2024-07-15 16:29:47.358170] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:01.842 [2024-07-15 16:29:47.358175] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:01.842 [2024-07-15 16:29:47.358180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:01.842 [2024-07-15 16:29:47.358189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:01.842 [2024-07-15 16:29:47.358196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.842 [2024-07-15 16:29:47.358212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:01.842 [2024-07-15 16:29:47.358230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.842 [2024-07-15 16:29:47.358285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.842 [2024-07-15 16:29:47.358292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.842 [2024-07-15 16:29:47.358295] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.842 [2024-07-15 16:29:47.358308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11422c0) 00:15:01.842 [2024-07-15 16:29:47.358322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.842 [2024-07-15 16:29:47.358328] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11422c0) 00:15:01.842 [2024-07-15 16:29:47.358341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.842 [2024-07-15 16:29:47.358347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11422c0) 00:15:01.842 [2024-07-15 16:29:47.358359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.842 [2024-07-15 16:29:47.358365] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.842 [2024-07-15 16:29:47.358378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.842 [2024-07-15 16:29:47.358383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:01.842 [2024-07-15 16:29:47.358397] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:01.842 [2024-07-15 16:29:47.358404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11422c0) 00:15:01.842 [2024-07-15 16:29:47.358415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.842 [2024-07-15 16:29:47.358434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183940, cid 0, qid 0 00:15:01.842 [2024-07-15 16:29:47.358440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183ac0, cid 1, qid 0 00:15:01.842 [2024-07-15 16:29:47.358445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183c40, cid 2, qid 0 00:15:01.842 [2024-07-15 16:29:47.358450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.842 [2024-07-15 16:29:47.358454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183f40, cid 4, qid 0 00:15:01.842 [2024-07-15 16:29:47.358548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.842 [2024-07-15 16:29:47.358555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.842 [2024-07-15 16:29:47.358558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183f40) on tqpair=0x11422c0 00:15:01.842 [2024-07-15 16:29:47.358568] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:01.842 [2024-07-15 16:29:47.358578] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:01.842 [2024-07-15 16:29:47.358589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11422c0) 00:15:01.842 [2024-07-15 16:29:47.358601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.842 [2024-07-15 16:29:47.358618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183f40, cid 4, qid 0 00:15:01.842 [2024-07-15 16:29:47.358686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.842 [2024-07-15 16:29:47.358693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.842 [2024-07-15 16:29:47.358697] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358700] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11422c0): datao=0, datal=4096, cccid=4 00:15:01.842 [2024-07-15 16:29:47.358705] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1183f40) on tqpair(0x11422c0): expected_datao=0, payload_size=4096 00:15:01.842 [2024-07-15 16:29:47.358710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358717] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358721] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.842 [2024-07-15 16:29:47.358735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.842 [2024-07-15 16:29:47.358739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183f40) on tqpair=0x11422c0 00:15:01.842 [2024-07-15 16:29:47.358756] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:01.842 [2024-07-15 16:29:47.358788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11422c0) 00:15:01.842 [2024-07-15 16:29:47.358802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.842 [2024-07-15 16:29:47.358810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.842 [2024-07-15 16:29:47.358817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11422c0) 00:15:01.842 [2024-07-15 16:29:47.358823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.842 [2024-07-15 16:29:47.358847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1183f40, cid 4, qid 0 00:15:01.842 [2024-07-15 16:29:47.358854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11840c0, cid 5, qid 0 00:15:01.842 [2024-07-15 16:29:47.358984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.843 [2024-07-15 16:29:47.358993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.843 [2024-07-15 16:29:47.358998] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359002] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11422c0): datao=0, datal=1024, cccid=4 00:15:01.843 [2024-07-15 16:29:47.359007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1183f40) on tqpair(0x11422c0): expected_datao=0, payload_size=1024 00:15:01.843 [2024-07-15 16:29:47.359011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359018] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359022] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.843 [2024-07-15 16:29:47.359033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.843 [2024-07-15 16:29:47.359037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11840c0) on tqpair=0x11422c0 00:15:01.843 [2024-07-15 16:29:47.359060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.843 [2024-07-15 16:29:47.359068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.843 [2024-07-15 16:29:47.359071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183f40) on tqpair=0x11422c0 00:15:01.843 [2024-07-15 16:29:47.359088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11422c0) 00:15:01.843 [2024-07-15 16:29:47.359100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.843 [2024-07-15 16:29:47.359124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183f40, cid 4, qid 0 00:15:01.843 [2024-07-15 16:29:47.359194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.843 [2024-07-15 16:29:47.359201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.843 [2024-07-15 16:29:47.359205] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359209] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11422c0): datao=0, datal=3072, cccid=4 00:15:01.843 [2024-07-15 16:29:47.359213] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1183f40) on tqpair(0x11422c0): expected_datao=0, payload_size=3072 00:15:01.843 [2024-07-15 16:29:47.359218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359225] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359228] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.843 [2024-07-15 16:29:47.359257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.843 [2024-07-15 16:29:47.359261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183f40) on tqpair=0x11422c0 00:15:01.843 [2024-07-15 16:29:47.359290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11422c0) 00:15:01.843 [2024-07-15 16:29:47.359302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.843 [2024-07-15 16:29:47.359324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183f40, cid 4, qid 0 00:15:01.843 [2024-07-15 16:29:47.359397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.843 [2024-07-15 16:29:47.359404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.843 [2024-07-15 16:29:47.359407] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359411] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11422c0): datao=0, datal=8, cccid=4 00:15:01.843 [2024-07-15 16:29:47.359416] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1183f40) on tqpair(0x11422c0): expected_datao=0, payload_size=8 00:15:01.843 [2024-07-15 16:29:47.359420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359427] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359430] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.843 [2024-07-15 16:29:47.359453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.843 [2024-07-15 16:29:47.359457] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.843 [2024-07-15 16:29:47.359461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183f40) on tqpair=0x11422c0 00:15:01.843 ===================================================== 00:15:01.843 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:01.843 ===================================================== 00:15:01.843 Controller Capabilities/Features 00:15:01.843 ================================ 00:15:01.843 Vendor ID: 0000 00:15:01.843 Subsystem Vendor ID: 0000 00:15:01.843 Serial Number: .................... 00:15:01.843 Model Number: ........................................ 
00:15:01.843 Firmware Version: 24.09 00:15:01.843 Recommended Arb Burst: 0 00:15:01.843 IEEE OUI Identifier: 00 00 00 00:15:01.843 Multi-path I/O 00:15:01.843 May have multiple subsystem ports: No 00:15:01.843 May have multiple controllers: No 00:15:01.843 Associated with SR-IOV VF: No 00:15:01.843 Max Data Transfer Size: 131072 00:15:01.843 Max Number of Namespaces: 0 00:15:01.843 Max Number of I/O Queues: 1024 00:15:01.843 NVMe Specification Version (VS): 1.3 00:15:01.843 NVMe Specification Version (Identify): 1.3 00:15:01.843 Maximum Queue Entries: 128 00:15:01.843 Contiguous Queues Required: Yes 00:15:01.843 Arbitration Mechanisms Supported 00:15:01.843 Weighted Round Robin: Not Supported 00:15:01.843 Vendor Specific: Not Supported 00:15:01.843 Reset Timeout: 15000 ms 00:15:01.843 Doorbell Stride: 4 bytes 00:15:01.843 NVM Subsystem Reset: Not Supported 00:15:01.843 Command Sets Supported 00:15:01.843 NVM Command Set: Supported 00:15:01.843 Boot Partition: Not Supported 00:15:01.843 Memory Page Size Minimum: 4096 bytes 00:15:01.843 Memory Page Size Maximum: 4096 bytes 00:15:01.843 Persistent Memory Region: Not Supported 00:15:01.843 Optional Asynchronous Events Supported 00:15:01.843 Namespace Attribute Notices: Not Supported 00:15:01.843 Firmware Activation Notices: Not Supported 00:15:01.843 ANA Change Notices: Not Supported 00:15:01.843 PLE Aggregate Log Change Notices: Not Supported 00:15:01.843 LBA Status Info Alert Notices: Not Supported 00:15:01.843 EGE Aggregate Log Change Notices: Not Supported 00:15:01.843 Normal NVM Subsystem Shutdown event: Not Supported 00:15:01.843 Zone Descriptor Change Notices: Not Supported 00:15:01.843 Discovery Log Change Notices: Supported 00:15:01.843 Controller Attributes 00:15:01.843 128-bit Host Identifier: Not Supported 00:15:01.843 Non-Operational Permissive Mode: Not Supported 00:15:01.843 NVM Sets: Not Supported 00:15:01.843 Read Recovery Levels: Not Supported 00:15:01.843 Endurance Groups: Not Supported 00:15:01.843 Predictable Latency Mode: Not Supported 00:15:01.843 Traffic Based Keep ALive: Not Supported 00:15:01.843 Namespace Granularity: Not Supported 00:15:01.843 SQ Associations: Not Supported 00:15:01.843 UUID List: Not Supported 00:15:01.843 Multi-Domain Subsystem: Not Supported 00:15:01.843 Fixed Capacity Management: Not Supported 00:15:01.843 Variable Capacity Management: Not Supported 00:15:01.843 Delete Endurance Group: Not Supported 00:15:01.843 Delete NVM Set: Not Supported 00:15:01.843 Extended LBA Formats Supported: Not Supported 00:15:01.843 Flexible Data Placement Supported: Not Supported 00:15:01.843 00:15:01.843 Controller Memory Buffer Support 00:15:01.843 ================================ 00:15:01.843 Supported: No 00:15:01.843 00:15:01.843 Persistent Memory Region Support 00:15:01.843 ================================ 00:15:01.843 Supported: No 00:15:01.843 00:15:01.843 Admin Command Set Attributes 00:15:01.843 ============================ 00:15:01.843 Security Send/Receive: Not Supported 00:15:01.843 Format NVM: Not Supported 00:15:01.843 Firmware Activate/Download: Not Supported 00:15:01.843 Namespace Management: Not Supported 00:15:01.843 Device Self-Test: Not Supported 00:15:01.843 Directives: Not Supported 00:15:01.843 NVMe-MI: Not Supported 00:15:01.843 Virtualization Management: Not Supported 00:15:01.843 Doorbell Buffer Config: Not Supported 00:15:01.843 Get LBA Status Capability: Not Supported 00:15:01.843 Command & Feature Lockdown Capability: Not Supported 00:15:01.843 Abort Command Limit: 1 00:15:01.843 Async 
Event Request Limit: 4 00:15:01.843 Number of Firmware Slots: N/A 00:15:01.843 Firmware Slot 1 Read-Only: N/A 00:15:01.843 Firmware Activation Without Reset: N/A 00:15:01.843 Multiple Update Detection Support: N/A 00:15:01.843 Firmware Update Granularity: No Information Provided 00:15:01.843 Per-Namespace SMART Log: No 00:15:01.843 Asymmetric Namespace Access Log Page: Not Supported 00:15:01.843 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:01.843 Command Effects Log Page: Not Supported 00:15:01.843 Get Log Page Extended Data: Supported 00:15:01.843 Telemetry Log Pages: Not Supported 00:15:01.843 Persistent Event Log Pages: Not Supported 00:15:01.843 Supported Log Pages Log Page: May Support 00:15:01.843 Commands Supported & Effects Log Page: Not Supported 00:15:01.843 Feature Identifiers & Effects Log Page:May Support 00:15:01.844 NVMe-MI Commands & Effects Log Page: May Support 00:15:01.844 Data Area 4 for Telemetry Log: Not Supported 00:15:01.844 Error Log Page Entries Supported: 128 00:15:01.844 Keep Alive: Not Supported 00:15:01.844 00:15:01.844 NVM Command Set Attributes 00:15:01.844 ========================== 00:15:01.844 Submission Queue Entry Size 00:15:01.844 Max: 1 00:15:01.844 Min: 1 00:15:01.844 Completion Queue Entry Size 00:15:01.844 Max: 1 00:15:01.844 Min: 1 00:15:01.844 Number of Namespaces: 0 00:15:01.844 Compare Command: Not Supported 00:15:01.844 Write Uncorrectable Command: Not Supported 00:15:01.844 Dataset Management Command: Not Supported 00:15:01.844 Write Zeroes Command: Not Supported 00:15:01.844 Set Features Save Field: Not Supported 00:15:01.844 Reservations: Not Supported 00:15:01.844 Timestamp: Not Supported 00:15:01.844 Copy: Not Supported 00:15:01.844 Volatile Write Cache: Not Present 00:15:01.844 Atomic Write Unit (Normal): 1 00:15:01.844 Atomic Write Unit (PFail): 1 00:15:01.844 Atomic Compare & Write Unit: 1 00:15:01.844 Fused Compare & Write: Supported 00:15:01.844 Scatter-Gather List 00:15:01.844 SGL Command Set: Supported 00:15:01.844 SGL Keyed: Supported 00:15:01.844 SGL Bit Bucket Descriptor: Not Supported 00:15:01.844 SGL Metadata Pointer: Not Supported 00:15:01.844 Oversized SGL: Not Supported 00:15:01.844 SGL Metadata Address: Not Supported 00:15:01.844 SGL Offset: Supported 00:15:01.844 Transport SGL Data Block: Not Supported 00:15:01.844 Replay Protected Memory Block: Not Supported 00:15:01.844 00:15:01.844 Firmware Slot Information 00:15:01.844 ========================= 00:15:01.844 Active slot: 0 00:15:01.844 00:15:01.844 00:15:01.844 Error Log 00:15:01.844 ========= 00:15:01.844 00:15:01.844 Active Namespaces 00:15:01.844 ================= 00:15:01.844 Discovery Log Page 00:15:01.844 ================== 00:15:01.844 Generation Counter: 2 00:15:01.844 Number of Records: 2 00:15:01.844 Record Format: 0 00:15:01.844 00:15:01.844 Discovery Log Entry 0 00:15:01.844 ---------------------- 00:15:01.844 Transport Type: 3 (TCP) 00:15:01.844 Address Family: 1 (IPv4) 00:15:01.844 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:01.844 Entry Flags: 00:15:01.844 Duplicate Returned Information: 1 00:15:01.844 Explicit Persistent Connection Support for Discovery: 1 00:15:01.844 Transport Requirements: 00:15:01.844 Secure Channel: Not Required 00:15:01.844 Port ID: 0 (0x0000) 00:15:01.844 Controller ID: 65535 (0xffff) 00:15:01.844 Admin Max SQ Size: 128 00:15:01.844 Transport Service Identifier: 4420 00:15:01.844 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:01.844 Transport Address: 10.0.0.2 00:15:01.844 
Discovery Log Entry 1 00:15:01.844 ---------------------- 00:15:01.844 Transport Type: 3 (TCP) 00:15:01.844 Address Family: 1 (IPv4) 00:15:01.844 Subsystem Type: 2 (NVM Subsystem) 00:15:01.844 Entry Flags: 00:15:01.844 Duplicate Returned Information: 0 00:15:01.844 Explicit Persistent Connection Support for Discovery: 0 00:15:01.844 Transport Requirements: 00:15:01.844 Secure Channel: Not Required 00:15:01.844 Port ID: 0 (0x0000) 00:15:01.844 Controller ID: 65535 (0xffff) 00:15:01.844 Admin Max SQ Size: 128 00:15:01.844 Transport Service Identifier: 4420 00:15:01.844 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:01.844 Transport Address: 10.0.0.2 [2024-07-15 16:29:47.359582] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:01.844 [2024-07-15 16:29:47.359596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183940) on tqpair=0x11422c0 00:15:01.844 [2024-07-15 16:29:47.359604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.844 [2024-07-15 16:29:47.359610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183ac0) on tqpair=0x11422c0 00:15:01.844 [2024-07-15 16:29:47.359615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.844 [2024-07-15 16:29:47.359620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183c40) on tqpair=0x11422c0 00:15:01.844 [2024-07-15 16:29:47.359625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.844 [2024-07-15 16:29:47.359630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.844 [2024-07-15 16:29:47.359635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.844 [2024-07-15 16:29:47.359645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.359650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.359655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.844 [2024-07-15 16:29:47.359663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.844 [2024-07-15 16:29:47.359685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.844 [2024-07-15 16:29:47.359738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.844 [2024-07-15 16:29:47.359745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.844 [2024-07-15 16:29:47.359749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.359753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.844 [2024-07-15 16:29:47.359761] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.359765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.359769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.844 [2024-07-15 
16:29:47.359776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.844 [2024-07-15 16:29:47.359798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.844 [2024-07-15 16:29:47.359885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.844 [2024-07-15 16:29:47.359892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.844 [2024-07-15 16:29:47.359896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.359900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.844 [2024-07-15 16:29:47.359905] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:01.844 [2024-07-15 16:29:47.359910] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:01.844 [2024-07-15 16:29:47.359935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.359940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.359959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.844 [2024-07-15 16:29:47.359966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.844 [2024-07-15 16:29:47.359985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.844 [2024-07-15 16:29:47.360036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.844 [2024-07-15 16:29:47.360042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.844 [2024-07-15 16:29:47.360046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.360050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.844 [2024-07-15 16:29:47.360061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.360065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.844 [2024-07-15 16:29:47.360069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.844 [2024-07-15 16:29:47.360076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.845 [2024-07-15 16:29:47.360092] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.845 [2024-07-15 16:29:47.360141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.845 [2024-07-15 16:29:47.360147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.845 [2024-07-15 16:29:47.360151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.845 [2024-07-15 16:29:47.360165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360172] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.845 [2024-07-15 16:29:47.360179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.845 [2024-07-15 16:29:47.360196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.845 [2024-07-15 16:29:47.360243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.845 [2024-07-15 16:29:47.360249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.845 [2024-07-15 16:29:47.360252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.845 [2024-07-15 16:29:47.360266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.845 [2024-07-15 16:29:47.360281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.845 [2024-07-15 16:29:47.360297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.845 [2024-07-15 16:29:47.360341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.845 [2024-07-15 16:29:47.360347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.845 [2024-07-15 16:29:47.360351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.845 [2024-07-15 16:29:47.360364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.845 [2024-07-15 16:29:47.360379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.845 [2024-07-15 16:29:47.360395] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.845 [2024-07-15 16:29:47.360444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.845 [2024-07-15 16:29:47.360450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.845 [2024-07-15 16:29:47.360453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.845 [2024-07-15 16:29:47.360467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.845 [2024-07-15 16:29:47.360482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.845 [2024-07-15 16:29:47.360498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.845 [2024-07-15 16:29:47.360547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.845 [2024-07-15 16:29:47.360553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.845 [2024-07-15 16:29:47.360556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.845 [2024-07-15 16:29:47.360570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.845 [2024-07-15 16:29:47.360585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.845 [2024-07-15 16:29:47.360601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.845 [2024-07-15 16:29:47.360647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.845 [2024-07-15 16:29:47.360653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.845 [2024-07-15 16:29:47.360657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.845 [2024-07-15 16:29:47.360670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.845 [2024-07-15 16:29:47.360685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.845 [2024-07-15 16:29:47.360701] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.845 [2024-07-15 16:29:47.360748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.845 [2024-07-15 16:29:47.360754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.845 [2024-07-15 16:29:47.360758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.845 [2024-07-15 16:29:47.360771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.845 [2024-07-15 16:29:47.360779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.845 [2024-07-15 16:29:47.360786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.846 [2024-07-15 16:29:47.360819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.846 
[2024-07-15 16:29:47.360889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.846 [2024-07-15 16:29:47.360896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.846 [2024-07-15 16:29:47.367864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.846 [2024-07-15 16:29:47.367881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.846 [2024-07-15 16:29:47.367902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.846 [2024-07-15 16:29:47.367907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.846 [2024-07-15 16:29:47.367911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11422c0) 00:15:01.846 [2024-07-15 16:29:47.367920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.846 [2024-07-15 16:29:47.367944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1183dc0, cid 3, qid 0 00:15:01.846 [2024-07-15 16:29:47.368005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.846 [2024-07-15 16:29:47.368012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.846 [2024-07-15 16:29:47.368016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.846 [2024-07-15 16:29:47.368020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1183dc0) on tqpair=0x11422c0 00:15:01.846 [2024-07-15 16:29:47.368028] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:15:01.846 00:15:02.136 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:02.136 [2024-07-15 16:29:47.414648] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
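Note on the run that starts here: host/identify.sh hands the whole transport ID to spdk_nvme_identify as one quoted -r string, and the trace that follows is the controller initialization that connect performs before the identify report is printed at the end of the run. The sketch below is a minimal illustration of that flow using the SPDK public headers (spdk/env.h, spdk/nvme.h), not the identify tool's actual source; the app name and the printed field (model number) are arbitrary examples, and error handling is trimmed.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Environment setup, corresponding in spirit to the DPDK EAL parameters logged below. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* hypothetical application name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* The same transport ID string the test passes with -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Connect drives the fabric CONNECT, VS/CAP reads, CC.EN=1 and CSTS.RDY
	 * polling that the debug trace below walks through state by state. */
	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
	if (ctrlr == NULL) {
		return 1;
	}

	/* The identify-controller data backs the report printed at the end of this run. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}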
00:15:02.136 [2024-07-15 16:29:47.414721] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74867 ] 00:15:02.136 [2024-07-15 16:29:47.559559] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:02.136 [2024-07-15 16:29:47.559676] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:02.136 [2024-07-15 16:29:47.559683] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:02.136 [2024-07-15 16:29:47.559716] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:02.136 [2024-07-15 16:29:47.559726] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:02.136 [2024-07-15 16:29:47.559990] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:02.136 [2024-07-15 16:29:47.560079] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xed42c0 0 00:15:02.136 [2024-07-15 16:29:47.566911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:02.136 [2024-07-15 16:29:47.566932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:02.136 [2024-07-15 16:29:47.566938] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:02.136 [2024-07-15 16:29:47.566942] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:02.136 [2024-07-15 16:29:47.566993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.567000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.567004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.136 [2024-07-15 16:29:47.567020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:02.136 [2024-07-15 16:29:47.567050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.136 [2024-07-15 16:29:47.573935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.136 [2024-07-15 16:29:47.573969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.136 [2024-07-15 16:29:47.573974] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.573979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.136 [2024-07-15 16:29:47.573991] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:02.136 [2024-07-15 16:29:47.573999] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:02.136 [2024-07-15 16:29:47.574006] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:02.136 [2024-07-15 16:29:47.574026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574035] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.136 [2024-07-15 16:29:47.574044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.136 [2024-07-15 16:29:47.574072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.136 [2024-07-15 16:29:47.574127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.136 [2024-07-15 16:29:47.574134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.136 [2024-07-15 16:29:47.574138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.136 [2024-07-15 16:29:47.574148] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:02.136 [2024-07-15 16:29:47.574156] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:02.136 [2024-07-15 16:29:47.574163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.136 [2024-07-15 16:29:47.574179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.136 [2024-07-15 16:29:47.574197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.136 [2024-07-15 16:29:47.574292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.136 [2024-07-15 16:29:47.574298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.136 [2024-07-15 16:29:47.574302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.136 [2024-07-15 16:29:47.574311] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:02.136 [2024-07-15 16:29:47.574320] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:02.136 [2024-07-15 16:29:47.574327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.136 [2024-07-15 16:29:47.574358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.136 [2024-07-15 16:29:47.574377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.136 [2024-07-15 16:29:47.574422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.136 [2024-07-15 16:29:47.574429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.136 [2024-07-15 16:29:47.574433] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.136 [2024-07-15 16:29:47.574442] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:02.136 [2024-07-15 16:29:47.574453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.136 [2024-07-15 16:29:47.574457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.574461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.137 [2024-07-15 16:29:47.574468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.137 [2024-07-15 16:29:47.574485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.137 [2024-07-15 16:29:47.574531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.137 [2024-07-15 16:29:47.574538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.137 [2024-07-15 16:29:47.574541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.574545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.137 [2024-07-15 16:29:47.574550] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:02.137 [2024-07-15 16:29:47.574556] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:02.137 [2024-07-15 16:29:47.574564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:02.137 [2024-07-15 16:29:47.574670] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:02.137 [2024-07-15 16:29:47.574675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:02.137 [2024-07-15 16:29:47.574685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.574689] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.574693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.137 [2024-07-15 16:29:47.574700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.137 [2024-07-15 16:29:47.574719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.137 [2024-07-15 16:29:47.574770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.137 [2024-07-15 16:29:47.574777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.137 [2024-07-15 16:29:47.574780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.574784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.137 [2024-07-15 16:29:47.574789] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:02.137 [2024-07-15 16:29:47.574799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.574803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.574807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.137 [2024-07-15 16:29:47.574814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.137 [2024-07-15 16:29:47.574831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.137 [2024-07-15 16:29:47.574876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.137 [2024-07-15 16:29:47.574883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.137 [2024-07-15 16:29:47.574887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.574890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.137 [2024-07-15 16:29:47.574895] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:02.137 [2024-07-15 16:29:47.574900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:02.137 [2024-07-15 16:29:47.574920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:02.137 [2024-07-15 16:29:47.574932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:02.137 [2024-07-15 16:29:47.574942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.574946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.137 [2024-07-15 16:29:47.574972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.137 [2024-07-15 16:29:47.574993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.137 [2024-07-15 16:29:47.575089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.137 [2024-07-15 16:29:47.575097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.137 [2024-07-15 16:29:47.575100] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575104] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed42c0): datao=0, datal=4096, cccid=0 00:15:02.137 [2024-07-15 16:29:47.575110] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf15940) on tqpair(0xed42c0): expected_datao=0, payload_size=4096 00:15:02.137 [2024-07-15 16:29:47.575115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575123] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575128] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.137 [2024-07-15 
16:29:47.575137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.137 [2024-07-15 16:29:47.575143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.137 [2024-07-15 16:29:47.575146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.137 [2024-07-15 16:29:47.575159] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:02.137 [2024-07-15 16:29:47.575165] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:02.137 [2024-07-15 16:29:47.575170] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:02.137 [2024-07-15 16:29:47.575175] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:02.137 [2024-07-15 16:29:47.575181] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:02.137 [2024-07-15 16:29:47.575186] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:02.137 [2024-07-15 16:29:47.575212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:02.137 [2024-07-15 16:29:47.575219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.137 [2024-07-15 16:29:47.575236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:02.137 [2024-07-15 16:29:47.575256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.137 [2024-07-15 16:29:47.575315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.137 [2024-07-15 16:29:47.575323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.137 [2024-07-15 16:29:47.575327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575331] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.137 [2024-07-15 16:29:47.575339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xed42c0) 00:15:02.137 [2024-07-15 16:29:47.575353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.137 [2024-07-15 16:29:47.575360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xed42c0) 00:15:02.137 
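At this point in the trace the init state machine has configured AER and is posting ASYNC EVENT REQUEST commands (cid 0 through 3, matching the Async Event Request Limit of 4 reported later in the identify output). For reference, a host application consumes those events by registering a callback and polling the admin queue; the fragment below is a hedged sketch assuming a 'ctrlr' obtained as in the earlier connect sketch, with a hypothetical function name and callback.

#include <stdio.h>
#include "spdk/nvme.h"

/* Invoked from spdk_nvme_ctrlr_process_admin_completions() whenever one of the
 * outstanding ASYNC EVENT REQUESTs completes on the admin queue. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

/* Hypothetical helper; assumes 'ctrlr' came from spdk_nvme_connect(). */
void
watch_async_events(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	for (;;) {
		/* Also services the keep-alive traffic configured just after this
		 * point in the trace (one keep alive every 5000000 us). */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}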
[2024-07-15 16:29:47.575374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.137 [2024-07-15 16:29:47.575380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575384] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xed42c0) 00:15:02.137 [2024-07-15 16:29:47.575393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.137 [2024-07-15 16:29:47.575399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.137 [2024-07-15 16:29:47.575412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.137 [2024-07-15 16:29:47.575416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:02.137 [2024-07-15 16:29:47.575430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:02.137 [2024-07-15 16:29:47.575438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed42c0) 00:15:02.137 [2024-07-15 16:29:47.575450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.137 [2024-07-15 16:29:47.575470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15940, cid 0, qid 0 00:15:02.137 [2024-07-15 16:29:47.575477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15ac0, cid 1, qid 0 00:15:02.137 [2024-07-15 16:29:47.575482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15c40, cid 2, qid 0 00:15:02.137 [2024-07-15 16:29:47.575487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.137 [2024-07-15 16:29:47.575491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15f40, cid 4, qid 0 00:15:02.137 [2024-07-15 16:29:47.575576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.137 [2024-07-15 16:29:47.575583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.137 [2024-07-15 16:29:47.575587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15f40) on tqpair=0xed42c0 00:15:02.137 [2024-07-15 16:29:47.575597] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:02.137 [2024-07-15 16:29:47.575606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:02.137 [2024-07-15 16:29:47.575616] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:02.137 [2024-07-15 16:29:47.575638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:02.137 [2024-07-15 16:29:47.575645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.137 [2024-07-15 16:29:47.575653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed42c0) 00:15:02.138 [2024-07-15 16:29:47.575660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:02.138 [2024-07-15 16:29:47.575678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15f40, cid 4, qid 0 00:15:02.138 [2024-07-15 16:29:47.575731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.138 [2024-07-15 16:29:47.575738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.138 [2024-07-15 16:29:47.575741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.575745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15f40) on tqpair=0xed42c0 00:15:02.138 [2024-07-15 16:29:47.575803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.575814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.575822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.575827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed42c0) 00:15:02.138 [2024-07-15 16:29:47.575834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.138 [2024-07-15 16:29:47.575870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15f40, cid 4, qid 0 00:15:02.138 [2024-07-15 16:29:47.575970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.138 [2024-07-15 16:29:47.575979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.138 [2024-07-15 16:29:47.575983] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.575987] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed42c0): datao=0, datal=4096, cccid=4 00:15:02.138 [2024-07-15 16:29:47.575992] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf15f40) on tqpair(0xed42c0): expected_datao=0, payload_size=4096 00:15:02.138 [2024-07-15 16:29:47.575997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576004] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576008] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.138 [2024-07-15 16:29:47.576023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:15:02.138 [2024-07-15 16:29:47.576027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15f40) on tqpair=0xed42c0 00:15:02.138 [2024-07-15 16:29:47.576048] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:02.138 [2024-07-15 16:29:47.576061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed42c0) 00:15:02.138 [2024-07-15 16:29:47.576092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.138 [2024-07-15 16:29:47.576113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15f40, cid 4, qid 0 00:15:02.138 [2024-07-15 16:29:47.576192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.138 [2024-07-15 16:29:47.576214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.138 [2024-07-15 16:29:47.576218] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576222] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed42c0): datao=0, datal=4096, cccid=4 00:15:02.138 [2024-07-15 16:29:47.576227] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf15f40) on tqpair(0xed42c0): expected_datao=0, payload_size=4096 00:15:02.138 [2024-07-15 16:29:47.576231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576238] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576242] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.138 [2024-07-15 16:29:47.576256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.138 [2024-07-15 16:29:47.576260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15f40) on tqpair=0xed42c0 00:15:02.138 [2024-07-15 16:29:47.576314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed42c0) 00:15:02.138 [2024-07-15 16:29:47.576358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.138 [2024-07-15 16:29:47.576378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15f40, cid 4, qid 0 00:15:02.138 [2024-07-15 16:29:47.576440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.138 [2024-07-15 16:29:47.576453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.138 [2024-07-15 16:29:47.576457] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576461] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed42c0): datao=0, datal=4096, cccid=4 00:15:02.138 [2024-07-15 16:29:47.576467] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf15f40) on tqpair(0xed42c0): expected_datao=0, payload_size=4096 00:15:02.138 [2024-07-15 16:29:47.576472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576479] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576483] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.138 [2024-07-15 16:29:47.576499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.138 [2024-07-15 16:29:47.576503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15f40) on tqpair=0xed42c0 00:15:02.138 [2024-07-15 16:29:47.576517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576538] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576578] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:02.138 [2024-07-15 16:29:47.576583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:02.138 [2024-07-15 16:29:47.576589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:02.138 [2024-07-15 16:29:47.576609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed42c0) 00:15:02.138 [2024-07-15 16:29:47.576622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.138 [2024-07-15 16:29:47.576629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xed42c0) 00:15:02.138 [2024-07-15 16:29:47.576660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.138 [2024-07-15 16:29:47.576699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15f40, cid 4, qid 0 00:15:02.138 [2024-07-15 16:29:47.576706] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf160c0, cid 5, qid 0 00:15:02.138 [2024-07-15 16:29:47.576777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.138 [2024-07-15 16:29:47.576784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.138 [2024-07-15 16:29:47.576787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15f40) on tqpair=0xed42c0 00:15:02.138 [2024-07-15 16:29:47.576798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.138 [2024-07-15 16:29:47.576803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.138 [2024-07-15 16:29:47.576807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf160c0) on tqpair=0xed42c0 00:15:02.138 [2024-07-15 16:29:47.576820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xed42c0) 00:15:02.138 [2024-07-15 16:29:47.576830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.138 [2024-07-15 16:29:47.576848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf160c0, cid 5, qid 0 00:15:02.138 [2024-07-15 16:29:47.576929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.138 [2024-07-15 16:29:47.576937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.138 [2024-07-15 16:29:47.576941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576960] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf160c0) on tqpair=0xed42c0 00:15:02.138 [2024-07-15 16:29:47.576988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.576993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xed42c0) 00:15:02.138 [2024-07-15 16:29:47.577001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.138 [2024-07-15 16:29:47.577040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf160c0, cid 5, qid 0 00:15:02.138 [2024-07-15 16:29:47.577109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.138 [2024-07-15 16:29:47.577117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:02.138 [2024-07-15 16:29:47.577120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.577125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf160c0) on tqpair=0xed42c0 00:15:02.138 [2024-07-15 16:29:47.577136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.138 [2024-07-15 16:29:47.577140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xed42c0) 00:15:02.138 [2024-07-15 16:29:47.577148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.138 [2024-07-15 16:29:47.577165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf160c0, cid 5, qid 0 00:15:02.138 [2024-07-15 16:29:47.577224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.139 [2024-07-15 16:29:47.577231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.139 [2024-07-15 16:29:47.577236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf160c0) on tqpair=0xed42c0 00:15:02.139 [2024-07-15 16:29:47.577260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xed42c0) 00:15:02.139 [2024-07-15 16:29:47.577273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.139 [2024-07-15 16:29:47.577281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xed42c0) 00:15:02.139 [2024-07-15 16:29:47.577292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.139 [2024-07-15 16:29:47.577300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xed42c0) 00:15:02.139 [2024-07-15 16:29:47.577311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.139 [2024-07-15 16:29:47.577324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xed42c0) 00:15:02.139 [2024-07-15 16:29:47.577335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.139 [2024-07-15 16:29:47.577369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf160c0, cid 5, qid 0 00:15:02.139 [2024-07-15 16:29:47.577377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15f40, cid 4, qid 0 00:15:02.139 [2024-07-15 16:29:47.577382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf16240, cid 6, qid 0 00:15:02.139 [2024-07-15 
16:29:47.577387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf163c0, cid 7, qid 0 00:15:02.139 [2024-07-15 16:29:47.577540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.139 [2024-07-15 16:29:47.577547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.139 [2024-07-15 16:29:47.577551] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577555] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed42c0): datao=0, datal=8192, cccid=5 00:15:02.139 [2024-07-15 16:29:47.577560] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf160c0) on tqpair(0xed42c0): expected_datao=0, payload_size=8192 00:15:02.139 [2024-07-15 16:29:47.577565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577583] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577588] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.139 [2024-07-15 16:29:47.577600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.139 [2024-07-15 16:29:47.577619] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577623] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed42c0): datao=0, datal=512, cccid=4 00:15:02.139 [2024-07-15 16:29:47.577628] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf15f40) on tqpair(0xed42c0): expected_datao=0, payload_size=512 00:15:02.139 [2024-07-15 16:29:47.577632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577638] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577642] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.139 [2024-07-15 16:29:47.577653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.139 [2024-07-15 16:29:47.577656] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577660] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed42c0): datao=0, datal=512, cccid=6 00:15:02.139 [2024-07-15 16:29:47.577665] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf16240) on tqpair(0xed42c0): expected_datao=0, payload_size=512 00:15:02.139 [2024-07-15 16:29:47.577669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577691] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577695] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:02.139 [2024-07-15 16:29:47.577706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:02.139 [2024-07-15 16:29:47.577710] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577714] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xed42c0): datao=0, datal=4096, cccid=7 00:15:02.139 [2024-07-15 16:29:47.577718] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf163c0) on tqpair(0xed42c0): expected_datao=0, payload_size=4096 00:15:02.139 [2024-07-15 16:29:47.577723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577730] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577733] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.139 [2024-07-15 16:29:47.577748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.139 [2024-07-15 16:29:47.577751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf160c0) on tqpair=0xed42c0 00:15:02.139 [2024-07-15 16:29:47.577773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.139 [2024-07-15 16:29:47.577780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.139 [2024-07-15 16:29:47.577783] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15f40) on tqpair=0xed42c0 00:15:02.139 [2024-07-15 16:29:47.577800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.139 [2024-07-15 16:29:47.577806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.139 [2024-07-15 16:29:47.577810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf16240) on tqpair=0xed42c0 00:15:02.139 [2024-07-15 16:29:47.577821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.139 [2024-07-15 16:29:47.577827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.139 [2024-07-15 16:29:47.577831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.139 [2024-07-15 16:29:47.577835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf163c0) on tqpair=0xed42c0 00:15:02.139 ===================================================== 00:15:02.139 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:02.139 ===================================================== 00:15:02.139 Controller Capabilities/Features 00:15:02.139 ================================ 00:15:02.139 Vendor ID: 8086 00:15:02.139 Subsystem Vendor ID: 8086 00:15:02.139 Serial Number: SPDK00000000000001 00:15:02.139 Model Number: SPDK bdev Controller 00:15:02.139 Firmware Version: 24.09 00:15:02.139 Recommended Arb Burst: 6 00:15:02.139 IEEE OUI Identifier: e4 d2 5c 00:15:02.139 Multi-path I/O 00:15:02.139 May have multiple subsystem ports: Yes 00:15:02.139 May have multiple controllers: Yes 00:15:02.139 Associated with SR-IOV VF: No 00:15:02.139 Max Data Transfer Size: 131072 00:15:02.139 Max Number of Namespaces: 32 00:15:02.139 Max Number of I/O Queues: 127 00:15:02.139 NVMe Specification Version (VS): 1.3 00:15:02.139 NVMe Specification Version (Identify): 1.3 00:15:02.139 Maximum Queue Entries: 128 00:15:02.139 Contiguous Queues Required: Yes 00:15:02.139 Arbitration Mechanisms Supported 00:15:02.139 Weighted Round Robin: Not Supported 00:15:02.139 Vendor Specific: Not Supported 00:15:02.139 Reset Timeout: 15000 ms 00:15:02.139 
Doorbell Stride: 4 bytes 00:15:02.139 NVM Subsystem Reset: Not Supported 00:15:02.139 Command Sets Supported 00:15:02.139 NVM Command Set: Supported 00:15:02.139 Boot Partition: Not Supported 00:15:02.139 Memory Page Size Minimum: 4096 bytes 00:15:02.139 Memory Page Size Maximum: 4096 bytes 00:15:02.139 Persistent Memory Region: Not Supported 00:15:02.139 Optional Asynchronous Events Supported 00:15:02.139 Namespace Attribute Notices: Supported 00:15:02.139 Firmware Activation Notices: Not Supported 00:15:02.139 ANA Change Notices: Not Supported 00:15:02.139 PLE Aggregate Log Change Notices: Not Supported 00:15:02.139 LBA Status Info Alert Notices: Not Supported 00:15:02.139 EGE Aggregate Log Change Notices: Not Supported 00:15:02.139 Normal NVM Subsystem Shutdown event: Not Supported 00:15:02.139 Zone Descriptor Change Notices: Not Supported 00:15:02.139 Discovery Log Change Notices: Not Supported 00:15:02.139 Controller Attributes 00:15:02.139 128-bit Host Identifier: Supported 00:15:02.139 Non-Operational Permissive Mode: Not Supported 00:15:02.139 NVM Sets: Not Supported 00:15:02.139 Read Recovery Levels: Not Supported 00:15:02.139 Endurance Groups: Not Supported 00:15:02.139 Predictable Latency Mode: Not Supported 00:15:02.139 Traffic Based Keep ALive: Not Supported 00:15:02.139 Namespace Granularity: Not Supported 00:15:02.139 SQ Associations: Not Supported 00:15:02.139 UUID List: Not Supported 00:15:02.139 Multi-Domain Subsystem: Not Supported 00:15:02.139 Fixed Capacity Management: Not Supported 00:15:02.139 Variable Capacity Management: Not Supported 00:15:02.139 Delete Endurance Group: Not Supported 00:15:02.139 Delete NVM Set: Not Supported 00:15:02.139 Extended LBA Formats Supported: Not Supported 00:15:02.139 Flexible Data Placement Supported: Not Supported 00:15:02.139 00:15:02.139 Controller Memory Buffer Support 00:15:02.139 ================================ 00:15:02.139 Supported: No 00:15:02.139 00:15:02.139 Persistent Memory Region Support 00:15:02.139 ================================ 00:15:02.139 Supported: No 00:15:02.139 00:15:02.139 Admin Command Set Attributes 00:15:02.139 ============================ 00:15:02.139 Security Send/Receive: Not Supported 00:15:02.139 Format NVM: Not Supported 00:15:02.139 Firmware Activate/Download: Not Supported 00:15:02.139 Namespace Management: Not Supported 00:15:02.139 Device Self-Test: Not Supported 00:15:02.139 Directives: Not Supported 00:15:02.139 NVMe-MI: Not Supported 00:15:02.139 Virtualization Management: Not Supported 00:15:02.139 Doorbell Buffer Config: Not Supported 00:15:02.139 Get LBA Status Capability: Not Supported 00:15:02.139 Command & Feature Lockdown Capability: Not Supported 00:15:02.140 Abort Command Limit: 4 00:15:02.140 Async Event Request Limit: 4 00:15:02.140 Number of Firmware Slots: N/A 00:15:02.140 Firmware Slot 1 Read-Only: N/A 00:15:02.140 Firmware Activation Without Reset: N/A 00:15:02.140 Multiple Update Detection Support: N/A 00:15:02.140 Firmware Update Granularity: No Information Provided 00:15:02.140 Per-Namespace SMART Log: No 00:15:02.140 Asymmetric Namespace Access Log Page: Not Supported 00:15:02.140 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:02.140 Command Effects Log Page: Supported 00:15:02.140 Get Log Page Extended Data: Supported 00:15:02.140 Telemetry Log Pages: Not Supported 00:15:02.140 Persistent Event Log Pages: Not Supported 00:15:02.140 Supported Log Pages Log Page: May Support 00:15:02.140 Commands Supported & Effects Log Page: Not Supported 00:15:02.140 Feature Identifiers & 
Effects Log Page:May Support 00:15:02.140 NVMe-MI Commands & Effects Log Page: May Support 00:15:02.140 Data Area 4 for Telemetry Log: Not Supported 00:15:02.140 Error Log Page Entries Supported: 128 00:15:02.140 Keep Alive: Supported 00:15:02.140 Keep Alive Granularity: 10000 ms 00:15:02.140 00:15:02.140 NVM Command Set Attributes 00:15:02.140 ========================== 00:15:02.140 Submission Queue Entry Size 00:15:02.140 Max: 64 00:15:02.140 Min: 64 00:15:02.140 Completion Queue Entry Size 00:15:02.140 Max: 16 00:15:02.140 Min: 16 00:15:02.140 Number of Namespaces: 32 00:15:02.140 Compare Command: Supported 00:15:02.140 Write Uncorrectable Command: Not Supported 00:15:02.140 Dataset Management Command: Supported 00:15:02.140 Write Zeroes Command: Supported 00:15:02.140 Set Features Save Field: Not Supported 00:15:02.140 Reservations: Supported 00:15:02.140 Timestamp: Not Supported 00:15:02.140 Copy: Supported 00:15:02.140 Volatile Write Cache: Present 00:15:02.140 Atomic Write Unit (Normal): 1 00:15:02.140 Atomic Write Unit (PFail): 1 00:15:02.140 Atomic Compare & Write Unit: 1 00:15:02.140 Fused Compare & Write: Supported 00:15:02.140 Scatter-Gather List 00:15:02.140 SGL Command Set: Supported 00:15:02.140 SGL Keyed: Supported 00:15:02.140 SGL Bit Bucket Descriptor: Not Supported 00:15:02.140 SGL Metadata Pointer: Not Supported 00:15:02.140 Oversized SGL: Not Supported 00:15:02.140 SGL Metadata Address: Not Supported 00:15:02.140 SGL Offset: Supported 00:15:02.140 Transport SGL Data Block: Not Supported 00:15:02.140 Replay Protected Memory Block: Not Supported 00:15:02.140 00:15:02.140 Firmware Slot Information 00:15:02.140 ========================= 00:15:02.140 Active slot: 1 00:15:02.140 Slot 1 Firmware Revision: 24.09 00:15:02.140 00:15:02.140 00:15:02.140 Commands Supported and Effects 00:15:02.140 ============================== 00:15:02.140 Admin Commands 00:15:02.140 -------------- 00:15:02.140 Get Log Page (02h): Supported 00:15:02.140 Identify (06h): Supported 00:15:02.140 Abort (08h): Supported 00:15:02.140 Set Features (09h): Supported 00:15:02.140 Get Features (0Ah): Supported 00:15:02.140 Asynchronous Event Request (0Ch): Supported 00:15:02.140 Keep Alive (18h): Supported 00:15:02.140 I/O Commands 00:15:02.140 ------------ 00:15:02.140 Flush (00h): Supported LBA-Change 00:15:02.140 Write (01h): Supported LBA-Change 00:15:02.140 Read (02h): Supported 00:15:02.140 Compare (05h): Supported 00:15:02.140 Write Zeroes (08h): Supported LBA-Change 00:15:02.140 Dataset Management (09h): Supported LBA-Change 00:15:02.140 Copy (19h): Supported LBA-Change 00:15:02.140 00:15:02.140 Error Log 00:15:02.140 ========= 00:15:02.140 00:15:02.140 Arbitration 00:15:02.140 =========== 00:15:02.140 Arbitration Burst: 1 00:15:02.140 00:15:02.140 Power Management 00:15:02.140 ================ 00:15:02.140 Number of Power States: 1 00:15:02.140 Current Power State: Power State #0 00:15:02.140 Power State #0: 00:15:02.140 Max Power: 0.00 W 00:15:02.140 Non-Operational State: Operational 00:15:02.140 Entry Latency: Not Reported 00:15:02.140 Exit Latency: Not Reported 00:15:02.140 Relative Read Throughput: 0 00:15:02.140 Relative Read Latency: 0 00:15:02.140 Relative Write Throughput: 0 00:15:02.140 Relative Write Latency: 0 00:15:02.140 Idle Power: Not Reported 00:15:02.140 Active Power: Not Reported 00:15:02.140 Non-Operational Permissive Mode: Not Supported 00:15:02.140 00:15:02.140 Health Information 00:15:02.140 ================== 00:15:02.140 Critical Warnings: 00:15:02.140 Available Spare Space: 
OK 00:15:02.140 Temperature: OK 00:15:02.140 Device Reliability: OK 00:15:02.140 Read Only: No 00:15:02.140 Volatile Memory Backup: OK 00:15:02.140 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:02.140 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:02.140 Available Spare: 0% 00:15:02.140 Available Spare Threshold: 0% 00:15:02.140 Life Percentage Used:[2024-07-15 16:29:47.581225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.140 [2024-07-15 16:29:47.581237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xed42c0) 00:15:02.140 [2024-07-15 16:29:47.581544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.140 [2024-07-15 16:29:47.581579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf163c0, cid 7, qid 0 00:15:02.140 [2024-07-15 16:29:47.581662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.140 [2024-07-15 16:29:47.581669] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.140 [2024-07-15 16:29:47.581689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.140 [2024-07-15 16:29:47.581693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf163c0) on tqpair=0xed42c0 00:15:02.140 [2024-07-15 16:29:47.581735] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:02.140 [2024-07-15 16:29:47.581747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15940) on tqpair=0xed42c0 00:15:02.140 [2024-07-15 16:29:47.581754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.140 [2024-07-15 16:29:47.581760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15ac0) on tqpair=0xed42c0 00:15:02.140 [2024-07-15 16:29:47.581765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.140 [2024-07-15 16:29:47.581770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15c40) on tqpair=0xed42c0 00:15:02.140 [2024-07-15 16:29:47.581774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.140 [2024-07-15 16:29:47.581779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.140 [2024-07-15 16:29:47.581784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.140 [2024-07-15 16:29:47.581794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.140 [2024-07-15 16:29:47.581798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.140 [2024-07-15 16:29:47.581802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.140 [2024-07-15 16:29:47.581809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.140 [2024-07-15 16:29:47.581833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.140 [2024-07-15 16:29:47.581880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.140 [2024-07-15 16:29:47.581887] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.140 [2024-07-15 16:29:47.581892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.140 [2024-07-15 16:29:47.581896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.140 [2024-07-15 16:29:47.581918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.140 [2024-07-15 16:29:47.581924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.140 [2024-07-15 16:29:47.581928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.581935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.581976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.582064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.582071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.582075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.582090] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:02.141 [2024-07-15 16:29:47.582095] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:02.141 [2024-07-15 16:29:47.582105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.582121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.582140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.582202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.582208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.582212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.582227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.582243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.582260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.582309] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.582316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.582319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.582349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.582365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.582383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.582426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.582433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.582437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.582452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.582467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.582485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.582536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.582542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.582546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.582561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.582577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.582595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.582637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.582644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.582648] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.582663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.582695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.582726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.582771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.582778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.582781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.582794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.582809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.582825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.582875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.582882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.582885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.582902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.582910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.582916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.582933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.582988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.582996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.582999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 
[2024-07-15 16:29:47.583014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.583029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.583049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.583095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.583102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.583105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.583119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.583134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.583151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.583199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.583205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.583209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.583222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.583238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.583255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.583304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.583325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.583328] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.141 [2024-07-15 16:29:47.583376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.141 [2024-07-15 
16:29:47.583384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.141 [2024-07-15 16:29:47.583392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.141 [2024-07-15 16:29:47.583409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.141 [2024-07-15 16:29:47.583456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.141 [2024-07-15 16:29:47.583463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.141 [2024-07-15 16:29:47.583467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.141 [2024-07-15 16:29:47.583471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.583481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.583497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.583515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.583561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.583567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.583571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.583586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.583602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.583620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.583669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.583676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.583679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.583694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.583710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.583758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.583827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.583834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.583837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.583851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.583866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.583883] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.583976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.583984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.583988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.583992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.584002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.584018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.584037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.584083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.584090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.584093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.584109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.584124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.584141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 
16:29:47.584209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.584215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.584219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.584233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.584248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.584264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.584324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.584331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.584350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.584382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.584398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.584415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.584460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.584467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.584471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.584493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.584509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.584527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.584572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.584579] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 
16:29:47.584582] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.584597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.584613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.584631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.584677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.584685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.584688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.584742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.584756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.584773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.584820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.584833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.584838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.584842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 00:15:02.142 [2024-07-15 16:29:47.584852] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.587973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.587991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xed42c0) 00:15:02.142 [2024-07-15 16:29:47.588001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:02.142 [2024-07-15 16:29:47.588027] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf15dc0, cid 3, qid 0 00:15:02.142 [2024-07-15 16:29:47.588086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:02.142 [2024-07-15 16:29:47.588093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:02.142 [2024-07-15 16:29:47.588096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:02.142 [2024-07-15 16:29:47.588101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf15dc0) on tqpair=0xed42c0 
00:15:02.142 [2024-07-15 16:29:47.588109] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:15:02.142 0% 00:15:02.142 Data Units Read: 0 00:15:02.142 Data Units Written: 0 00:15:02.142 Host Read Commands: 0 00:15:02.142 Host Write Commands: 0 00:15:02.142 Controller Busy Time: 0 minutes 00:15:02.142 Power Cycles: 0 00:15:02.143 Power On Hours: 0 hours 00:15:02.143 Unsafe Shutdowns: 0 00:15:02.143 Unrecoverable Media Errors: 0 00:15:02.143 Lifetime Error Log Entries: 0 00:15:02.143 Warning Temperature Time: 0 minutes 00:15:02.143 Critical Temperature Time: 0 minutes 00:15:02.143 00:15:02.143 Number of Queues 00:15:02.143 ================ 00:15:02.143 Number of I/O Submission Queues: 127 00:15:02.143 Number of I/O Completion Queues: 127 00:15:02.143 00:15:02.143 Active Namespaces 00:15:02.143 ================= 00:15:02.143 Namespace ID:1 00:15:02.143 Error Recovery Timeout: Unlimited 00:15:02.143 Command Set Identifier: NVM (00h) 00:15:02.143 Deallocate: Supported 00:15:02.143 Deallocated/Unwritten Error: Not Supported 00:15:02.143 Deallocated Read Value: Unknown 00:15:02.143 Deallocate in Write Zeroes: Not Supported 00:15:02.143 Deallocated Guard Field: 0xFFFF 00:15:02.143 Flush: Supported 00:15:02.143 Reservation: Supported 00:15:02.143 Namespace Sharing Capabilities: Multiple Controllers 00:15:02.143 Size (in LBAs): 131072 (0GiB) 00:15:02.143 Capacity (in LBAs): 131072 (0GiB) 00:15:02.143 Utilization (in LBAs): 131072 (0GiB) 00:15:02.143 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:02.143 EUI64: ABCDEF0123456789 00:15:02.143 UUID: e3755bd4-e6f3-459f-a374-a5f6f4f4ddf3 00:15:02.143 Thin Provisioning: Not Supported 00:15:02.143 Per-NS Atomic Units: Yes 00:15:02.143 Atomic Boundary Size (Normal): 0 00:15:02.143 Atomic Boundary Size (PFail): 0 00:15:02.143 Atomic Boundary Offset: 0 00:15:02.143 Maximum Single Source Range Length: 65535 00:15:02.143 Maximum Copy Length: 65535 00:15:02.143 Maximum Source Range Count: 1 00:15:02.143 NGUID/EUI64 Never Reused: No 00:15:02.143 Namespace Write Protected: No 00:15:02.143 Number of LBA Formats: 1 00:15:02.143 Current LBA Format: LBA Format #00 00:15:02.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:02.143 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.143 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.143 rmmod nvme_tcp 00:15:02.402 rmmod nvme_fabrics 00:15:02.402 rmmod nvme_keyring 
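A note on the records just above: the identify example's controller dump and its nvme_tcp DEBUG trace interleave on the same console, and the long burst of repeated pdu type = 5 / capsule_resp / req_complete records on cid 3 is the host polling the controller's shutdown status with Fabrics Property Get commands until nvme_ctrlr_shutdown_poll_async reports completion (6 milliseconds here); the buffered tail of the dump (health counters, namespace 1 details) then flushes out before identify.sh starts cleanup. Condensed into standalone commands, that cleanup amounts to roughly the sketch below; the rpc.py path and NQN are copied from this log, while the exact helper plumbing (rpc_cmd, nvmftestfini) is assumed from the script names shown:

  sync
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp        # prints the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics    # continued in the next record

nvmftestfini then kills the target process (pid 74830 in this run) and flushes the test interface addresses, which is what the remaining records of this test show.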
00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74830 ']' 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74830 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74830 ']' 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74830 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74830 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:02.402 killing process with pid 74830 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74830' 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74830 00:15:02.402 16:29:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74830 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:02.662 00:15:02.662 real 0m2.730s 00:15:02.662 user 0m7.722s 00:15:02.662 sys 0m0.674s 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.662 ************************************ 00:15:02.662 END TEST nvmf_identify 00:15:02.662 ************************************ 00:15:02.662 16:29:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:02.662 16:29:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:02.662 16:29:48 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:02.662 16:29:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:02.662 16:29:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.662 16:29:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.921 ************************************ 00:15:02.921 START TEST nvmf_perf 00:15:02.921 ************************************ 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:02.921 * Looking for test storage... 00:15:02.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.921 16:29:48 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:02.921 Cannot find device "nvmf_tgt_br" 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.921 Cannot find device "nvmf_tgt_br2" 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:02.921 Cannot find device "nvmf_tgt_br" 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:02.921 Cannot find device "nvmf_tgt_br2" 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:02.921 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:03.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:15:03.180 00:15:03.180 --- 10.0.0.2 ping statistics --- 00:15:03.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.180 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:03.180 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:03.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:03.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:03.180 00:15:03.180 --- 10.0.0.3 ping statistics --- 00:15:03.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.180 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:03.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:03.181 00:15:03.181 --- 10.0.0.1 ping statistics --- 00:15:03.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.181 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75037 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75037 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75037 ']' 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.181 16:29:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:03.440 [2024-07-15 16:29:48.750441] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
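The three pings above verify both directions of the veth/bridge path that nvmftestinit just built (10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, 10.0.0.1 on the host side), after which nvmfappstart launches the target inside that namespace with core mask 0xF and tracepoint group mask 0xFFFF. A minimal sketch of the topology, with names, addresses, and masks copied from the records above and the exact ordering and backgrounding assumed, looks like:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # nvmf_tgt_if2 / 10.0.0.3 is added the same way
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # plus the matching 'ip link set ... up' calls
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The EAL parameter line and reactor start-up notices that follow are that nvmf_tgt instance coming up on four cores.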
00:15:03.440 [2024-07-15 16:29:48.750532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.440 [2024-07-15 16:29:48.892951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.698 [2024-07-15 16:29:49.041984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.698 [2024-07-15 16:29:49.042053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.698 [2024-07-15 16:29:49.042065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.698 [2024-07-15 16:29:49.042073] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.698 [2024-07-15 16:29:49.042081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.698 [2024-07-15 16:29:49.042192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.698 [2024-07-15 16:29:49.042494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.698 [2024-07-15 16:29:49.042975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.698 [2024-07-15 16:29:49.043109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.698 [2024-07-15 16:29:49.120709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:04.266 16:29:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.266 16:29:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:15:04.266 16:29:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.266 16:29:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:04.266 16:29:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:04.266 16:29:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.266 16:29:49 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:04.266 16:29:49 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:04.833 16:29:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:04.833 16:29:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:05.091 16:29:50 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:05.091 16:29:50 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:05.350 16:29:50 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:05.350 16:29:50 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:05.350 16:29:50 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:05.350 16:29:50 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:05.350 16:29:50 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:05.608 [2024-07-15 16:29:51.013485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
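For reference, the nvmf_perf bring-up traced above reduces to three steps: nvmf_veth_init builds a bridged veth topology with the target-side interfaces inside the nvmf_tgt_ns_spdk namespace, nvmfappstart launches nvmf_tgt inside that namespace, and host/perf.sh then configures bdevs and the TCP transport over rpc.py. The following is a consolidated sketch of those commands using only the interface names, addresses, and paths that appear in this log; it is offered as a reading aid, not as part of the test scripts themselves.

# Sketch only: condensed from the nvmf/common.sh and host/perf.sh traces above.
SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py
# The initiator side stays in the default namespace (10.0.0.1); the target ends of the
# veth pairs move into nvmf_tgt_ns_spdk and get 10.0.0.2 and 10.0.0.3.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side peers together and open TCP/4420 on the initiator interface.
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The target application runs inside the namespace; rpc.py reaches it on /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
# host/perf.sh feeds gen_nvme.sh output into load_subsystem_config (exact plumbing elided here),
# then adds a malloc bdev and the TCP transport:
#   "$SPDK/scripts/gen_nvme.sh" | "$RPC" load_subsystem_config
"$RPC" bdev_malloc_create 64 512        # 64 MiB malloc bdev, 512-byte blocks
"$RPC" nvmf_create_transport -t tcp -o  # TCP transport, option string as used by these tests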
00:15:05.608 16:29:51 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.869 16:29:51 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:05.869 16:29:51 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.162 16:29:51 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:06.162 16:29:51 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:06.422 16:29:51 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.679 [2024-07-15 16:29:51.976180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.679 16:29:51 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.937 16:29:52 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:06.937 16:29:52 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:06.937 16:29:52 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:06.937 16:29:52 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:07.871 Initializing NVMe Controllers 00:15:07.871 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:07.871 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:07.871 Initialization complete. Launching workers. 00:15:07.871 ======================================================== 00:15:07.871 Latency(us) 00:15:07.871 Device Information : IOPS MiB/s Average min max 00:15:07.871 PCIE (0000:00:10.0) NSID 1 from core 0: 22145.51 86.51 1445.20 370.41 8294.61 00:15:07.871 ======================================================== 00:15:07.871 Total : 22145.51 86.51 1445.20 370.41 8294.61 00:15:07.871 00:15:07.871 16:29:53 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:09.243 Initializing NVMe Controllers 00:15:09.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:09.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:09.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:09.243 Initialization complete. Launching workers. 
00:15:09.243 ======================================================== 00:15:09.243 Latency(us) 00:15:09.243 Device Information : IOPS MiB/s Average min max 00:15:09.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3007.00 11.75 332.29 112.94 7288.03 00:15:09.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8120.46 5066.12 14977.90 00:15:09.243 ======================================================== 00:15:09.243 Total : 3131.00 12.23 640.73 112.94 14977.90 00:15:09.243 00:15:09.500 16:29:54 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:10.885 Initializing NVMe Controllers 00:15:10.885 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:10.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:10.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:10.885 Initialization complete. Launching workers. 00:15:10.886 ======================================================== 00:15:10.886 Latency(us) 00:15:10.886 Device Information : IOPS MiB/s Average min max 00:15:10.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8269.21 32.30 3870.10 526.88 7878.31 00:15:10.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3908.21 15.27 8225.20 5587.83 15982.82 00:15:10.886 ======================================================== 00:15:10.886 Total : 12177.42 47.57 5267.82 526.88 15982.82 00:15:10.886 00:15:10.886 16:29:56 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:10.886 16:29:56 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:13.443 Initializing NVMe Controllers 00:15:13.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.444 Controller IO queue size 128, less than required. 00:15:13.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.444 Controller IO queue size 128, less than required. 00:15:13.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:13.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:13.444 Initialization complete. Launching workers. 
00:15:13.444 ======================================================== 00:15:13.444 Latency(us) 00:15:13.444 Device Information : IOPS MiB/s Average min max 00:15:13.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1372.67 343.17 94718.93 49401.45 143221.16 00:15:13.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 637.35 159.34 212182.07 72701.79 321055.34 00:15:13.444 ======================================================== 00:15:13.444 Total : 2010.02 502.50 131964.77 49401.45 321055.34 00:15:13.444 00:15:13.444 16:29:58 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:13.703 Initializing NVMe Controllers 00:15:13.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.703 Controller IO queue size 128, less than required. 00:15:13.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.703 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:13.703 Controller IO queue size 128, less than required. 00:15:13.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.703 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:13.703 WARNING: Some requested NVMe devices were skipped 00:15:13.703 No valid NVMe controllers or AIO or URING devices found 00:15:13.703 16:29:59 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:16.235 Initializing NVMe Controllers 00:15:16.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.235 Controller IO queue size 128, less than required. 00:15:16.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:16.235 Controller IO queue size 128, less than required. 00:15:16.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:16.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:16.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:16.235 Initialization complete. Launching workers. 
00:15:16.235 00:15:16.235 ==================== 00:15:16.235 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:16.235 TCP transport: 00:15:16.235 polls: 7807 00:15:16.235 idle_polls: 4955 00:15:16.235 sock_completions: 2852 00:15:16.235 nvme_completions: 5213 00:15:16.235 submitted_requests: 7820 00:15:16.235 queued_requests: 1 00:15:16.235 00:15:16.235 ==================== 00:15:16.235 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:16.235 TCP transport: 00:15:16.235 polls: 10805 00:15:16.235 idle_polls: 7461 00:15:16.235 sock_completions: 3344 00:15:16.235 nvme_completions: 5739 00:15:16.235 submitted_requests: 8664 00:15:16.235 queued_requests: 1 00:15:16.235 ======================================================== 00:15:16.235 Latency(us) 00:15:16.235 Device Information : IOPS MiB/s Average min max 00:15:16.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1302.35 325.59 99877.00 52463.31 159707.72 00:15:16.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1433.78 358.44 90164.48 50497.32 167524.88 00:15:16.235 ======================================================== 00:15:16.235 Total : 2736.13 684.03 94787.47 50497.32 167524.88 00:15:16.235 00:15:16.235 16:30:01 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:16.235 16:30:01 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.493 rmmod nvme_tcp 00:15:16.493 rmmod nvme_fabrics 00:15:16.493 rmmod nvme_keyring 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75037 ']' 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75037 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75037 ']' 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75037 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75037 00:15:16.493 killing process with pid 75037 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:16.493 16:30:01 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75037' 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75037 00:15:16.493 16:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75037 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:17.429 00:15:17.429 real 0m14.552s 00:15:17.429 user 0m53.127s 00:15:17.429 sys 0m4.063s 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.429 ************************************ 00:15:17.429 END TEST nvmf_perf 00:15:17.429 ************************************ 00:15:17.429 16:30:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:17.429 16:30:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:17.429 16:30:02 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:17.429 16:30:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:17.429 16:30:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.429 16:30:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.429 ************************************ 00:15:17.429 START TEST nvmf_fio_host 00:15:17.429 ************************************ 00:15:17.429 16:30:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:17.429 * Looking for test storage... 
00:15:17.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:17.429 16:30:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.429 16:30:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.429 16:30:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.429 16:30:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:17.430 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:17.689 Cannot find device "nvmf_tgt_br" 00:15:17.689 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:17.689 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.689 Cannot find device "nvmf_tgt_br2" 00:15:17.689 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:17.689 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:17.689 16:30:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:17.689 Cannot find device "nvmf_tgt_br" 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:17.689 Cannot find device "nvmf_tgt_br2" 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:17.689 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:17.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:15:17.947 00:15:17.947 --- 10.0.0.2 ping statistics --- 00:15:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.947 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:17.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:17.947 00:15:17.947 --- 10.0.0.3 ping statistics --- 00:15:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.947 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:17.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:17.947 00:15:17.947 --- 10.0.0.1 ping statistics --- 00:15:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.947 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75441 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75441 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75441 ']' 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.947 16:30:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.947 [2024-07-15 16:30:03.380755] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:17.947 [2024-07-15 16:30:03.381530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.206 [2024-07-15 16:30:03.527091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.206 [2024-07-15 16:30:03.705383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:18.206 [2024-07-15 16:30:03.705466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.206 [2024-07-15 16:30:03.705482] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.206 [2024-07-15 16:30:03.705493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.206 [2024-07-15 16:30:03.705502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.206 [2024-07-15 16:30:03.705669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.206 [2024-07-15 16:30:03.706267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.206 [2024-07-15 16:30:03.706328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.206 [2024-07-15 16:30:03.706340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.465 [2024-07-15 16:30:03.783522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:19.033 16:30:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.033 16:30:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:15:19.033 16:30:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:19.291 [2024-07-15 16:30:04.595743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.291 16:30:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:19.291 16:30:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.291 16:30:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.291 16:30:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:19.549 Malloc1 00:15:19.549 16:30:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:19.806 16:30:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.062 16:30:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.321 [2024-07-15 16:30:05.767103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.321 16:30:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:20.580 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:20.838 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:20.838 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:20.838 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:20.838 16:30:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:20.838 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:20.838 fio-3.35 00:15:20.838 Starting 1 thread 00:15:23.374 00:15:23.374 test: (groupid=0, jobs=1): err= 0: pid=75524: Mon Jul 15 16:30:08 2024 00:15:23.374 read: IOPS=8050, BW=31.4MiB/s (33.0MB/s)(63.1MiB/2008msec) 00:15:23.374 slat (usec): min=2, max=356, avg= 2.62, stdev= 3.62 00:15:23.374 clat (usec): min=2618, max=13992, avg=8284.79, stdev=576.63 00:15:23.374 lat (usec): min=2677, max=13995, avg=8287.41, stdev=576.25 00:15:23.374 clat percentiles (usec): 00:15:23.374 | 1.00th=[ 6980], 5.00th=[ 7504], 10.00th=[ 7635], 20.00th=[ 7898], 00:15:23.374 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:15:23.374 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9110], 00:15:23.374 | 99.00th=[ 9634], 99.50th=[ 9765], 99.90th=[12649], 99.95th=[13566], 00:15:23.374 | 99.99th=[13960] 00:15:23.374 bw ( KiB/s): min=31184, max=32952, per=100.00%, avg=32214.00, stdev=747.66, samples=4 00:15:23.374 iops : min= 7796, max= 8238, avg=8053.50, stdev=186.91, samples=4 00:15:23.374 write: IOPS=8030, BW=31.4MiB/s (32.9MB/s)(63.0MiB/2008msec); 0 zone resets 00:15:23.374 slat (usec): 
min=2, max=248, avg= 2.75, stdev= 2.32 00:15:23.374 clat (usec): min=2430, max=13873, avg=7559.84, stdev=542.27 00:15:23.374 lat (usec): min=2486, max=13875, avg=7562.60, stdev=542.07 00:15:23.374 clat percentiles (usec): 00:15:23.374 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:15:23.374 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7635], 00:15:23.374 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8094], 95.00th=[ 8291], 00:15:23.374 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[12518], 99.95th=[12911], 00:15:23.374 | 99.99th=[13829] 00:15:23.374 bw ( KiB/s): min=31744, max=32384, per=99.97%, avg=32114.00, stdev=296.35, samples=4 00:15:23.374 iops : min= 7936, max= 8096, avg=8028.50, stdev=74.09, samples=4 00:15:23.374 lat (msec) : 4=0.09%, 10=99.64%, 20=0.27% 00:15:23.374 cpu : usr=67.86%, sys=23.77%, ctx=12, majf=0, minf=7 00:15:23.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:23.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.374 issued rwts: total=16165,16126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.374 00:15:23.374 Run status group 0 (all jobs): 00:15:23.374 READ: bw=31.4MiB/s (33.0MB/s), 31.4MiB/s-31.4MiB/s (33.0MB/s-33.0MB/s), io=63.1MiB (66.2MB), run=2008-2008msec 00:15:23.374 WRITE: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=63.0MiB (66.1MB), run=2008-2008msec 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.374 16:30:08 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:23.374 16:30:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:23.374 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:23.374 fio-3.35 00:15:23.374 Starting 1 thread 00:15:25.953 00:15:25.953 test: (groupid=0, jobs=1): err= 0: pid=75573: Mon Jul 15 16:30:11 2024 00:15:25.953 read: IOPS=7502, BW=117MiB/s (123MB/s)(236MiB/2011msec) 00:15:25.953 slat (usec): min=3, max=124, avg= 3.95, stdev= 1.79 00:15:25.953 clat (usec): min=2774, max=19768, avg=9530.03, stdev=2883.50 00:15:25.953 lat (usec): min=2778, max=19772, avg=9533.98, stdev=2883.53 00:15:25.953 clat percentiles (usec): 00:15:25.953 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 7046], 00:15:25.953 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9896], 00:15:25.953 | 70.00th=[10814], 80.00th=[11469], 90.00th=[13173], 95.00th=[14746], 00:15:25.953 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:15:25.953 | 99.99th=[19792] 00:15:25.953 bw ( KiB/s): min=55168, max=67200, per=50.79%, avg=60976.00, stdev=5532.55, samples=4 00:15:25.953 iops : min= 3448, max= 4200, avg=3811.00, stdev=345.78, samples=4 00:15:25.953 write: IOPS=4302, BW=67.2MiB/s (70.5MB/s)(125MiB/1853msec); 0 zone resets 00:15:25.953 slat (usec): min=36, max=357, avg=39.62, stdev= 7.77 00:15:25.953 clat (usec): min=4060, max=22696, avg=13160.52, stdev=2441.90 00:15:25.953 lat (usec): min=4097, max=22735, avg=13200.14, stdev=2442.02 00:15:25.953 clat percentiles (usec): 00:15:25.953 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11076], 00:15:25.953 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12911], 60.00th=[13566], 00:15:25.953 | 70.00th=[14222], 80.00th=[15139], 90.00th=[16319], 95.00th=[17433], 00:15:25.953 | 99.00th=[19792], 99.50th=[20841], 99.90th=[21365], 99.95th=[21627], 00:15:25.953 | 99.99th=[22676] 00:15:25.953 bw ( KiB/s): min=56832, max=69856, per=92.20%, avg=63464.00, stdev=5996.70, samples=4 00:15:25.953 iops : min= 3552, max= 4366, avg=3966.50, stdev=374.79, samples=4 00:15:25.953 lat (msec) : 4=0.13%, 10=42.44%, 20=57.12%, 50=0.31% 00:15:25.953 cpu : usr=82.10%, sys=13.77%, ctx=5, majf=0, minf=8 00:15:25.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:15:25.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.953 issued rwts: total=15088,7972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.953 00:15:25.953 Run status group 0 (all jobs): 00:15:25.953 READ: bw=117MiB/s (123MB/s), 
117MiB/s-117MiB/s (123MB/s-123MB/s), io=236MiB (247MB), run=2011-2011msec 00:15:25.953 WRITE: bw=67.2MiB/s (70.5MB/s), 67.2MiB/s-67.2MiB/s (70.5MB/s-70.5MB/s), io=125MiB (131MB), run=1853-1853msec 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.953 rmmod nvme_tcp 00:15:25.953 rmmod nvme_fabrics 00:15:25.953 rmmod nvme_keyring 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75441 ']' 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75441 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75441 ']' 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75441 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75441 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:25.953 killing process with pid 75441 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75441' 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75441 00:15:25.953 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75441 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
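Both fio jobs above are launched through the fio_nvme/fio_plugin helpers in autotest_common.sh, which LD_PRELOAD the SPDK NVMe fio plugin and pass the NVMe-oF transport ID as the fio filename instead of a block device. A minimal standalone sketch of that invocation, using only the binary and job-file paths that appear in this log (again a reading aid, not the test harness itself):

# Sketch only: drive fio through the SPDK NVMe external ioengine against the TCP subsystem.
SPDK=/home/vagrant/spdk_repo/spdk
LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
    "$SPDK/app/fio/nvme/example_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096
# The second run above swaps in mock_sgl_config.fio (16 KiB transfers) against the same subsystem.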
00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:26.520 00:15:26.520 real 0m9.012s 00:15:26.520 user 0m36.501s 00:15:26.520 sys 0m2.344s 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.520 16:30:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.520 ************************************ 00:15:26.520 END TEST nvmf_fio_host 00:15:26.520 ************************************ 00:15:26.520 16:30:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:26.520 16:30:11 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:26.520 16:30:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:26.520 16:30:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.520 16:30:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.520 ************************************ 00:15:26.520 START TEST nvmf_failover 00:15:26.520 ************************************ 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:26.520 * Looking for test storage... 00:15:26.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.520 16:30:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:26.520 Cannot find device "nvmf_tgt_br" 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:15:26.520 Cannot find device "nvmf_tgt_br2" 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:26.520 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:26.520 Cannot find device "nvmf_tgt_br" 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:26.779 Cannot find device "nvmf_tgt_br2" 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:26.779 16:30:12 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:26.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:26.779 00:15:26.779 --- 10.0.0.2 ping statistics --- 00:15:26.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.779 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:26.779 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:27.038 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:27.038 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:15:27.038 00:15:27.038 --- 10.0.0.3 ping statistics --- 00:15:27.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.038 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:15:27.038 00:15:27.038 --- 10.0.0.1 ping statistics --- 00:15:27.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.038 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75782 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75782 
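Before the target comes up, nvmf_veth_init (the block above) builds the virtual topology the rest of the run relies on: a network namespace for the target, three veth pairs whose bridge-side ends are enslaved to nvmf_br, 10.0.0.1/24 on the initiator side, 10.0.0.2/24 and 10.0.0.3/24 inside the namespace, an iptables accept rule for port 4420, and ping checks in both directions. Condensed from the commands logged above (same names and addresses; error handling omitted):

# condensed from the nvmf_veth_init steps logged above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, first port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, second port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge ties the three peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host reaches both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # and the namespace reaches the host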
00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75782 ']' 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.038 16:30:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:27.038 [2024-07-15 16:30:12.410969] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:27.038 [2024-07-15 16:30:12.411053] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.038 [2024-07-15 16:30:12.547310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.296 [2024-07-15 16:30:12.666641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.296 [2024-07-15 16:30:12.666923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.296 [2024-07-15 16:30:12.667074] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.296 [2024-07-15 16:30:12.667289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.296 [2024-07-15 16:30:12.667327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
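With the namespace in place, the target is launched inside it with a 0xE core mask (three reactors, matching the "Total cores available: 3" notice above), and the script blocks until the RPC socket answers before provisioning anything. A minimal sketch of that launch-and-wait step follows; the polling loop stands in for the waitforlisten helper, whose internals are not shown in the log, and uses rpc_get_methods as the readiness probe:

# sketch of nvmfappstart -m 0xE as logged above; the readiness loop is an assumption,
# not the real waitforlisten implementation
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!    # pid 75782 in this run
for _ in $(seq 1 100); do
    # the target is usable once its UNIX-domain RPC socket (/var/tmp/spdk.sock) responds
    if $rpc_py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done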
00:15:27.296 [2024-07-15 16:30:12.667601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.296 [2024-07-15 16:30:12.667676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.296 [2024-07-15 16:30:12.667681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.296 [2024-07-15 16:30:12.721912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:27.862 16:30:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.862 16:30:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:27.862 16:30:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.862 16:30:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.862 16:30:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:28.120 16:30:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.120 16:30:13 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:28.120 [2024-07-15 16:30:13.629874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.120 16:30:13 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:28.687 Malloc0 00:15:28.687 16:30:13 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:28.687 16:30:14 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:28.945 16:30:14 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.203 [2024-07-15 16:30:14.621584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.203 16:30:14 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:29.462 [2024-07-15 16:30:14.910032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:29.462 16:30:14 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:29.721 [2024-07-15 16:30:15.142330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75840 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75840 /var/tmp/bdevperf.sock 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover 
-- common/autotest_common.sh@829 -- # '[' -z 75840 ']' 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.721 16:30:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 16:30:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.098 16:30:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:31.098 16:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:31.098 NVMe0n1 00:15:31.098 16:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:31.666 00:15:31.666 16:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75869 00:15:31.666 16:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.666 16:30:16 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:32.622 16:30:17 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.882 [2024-07-15 16:30:18.198648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.882 [2024-07-15 16:30:18.198731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.882 [2024-07-15 16:30:18.198744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.882 [2024-07-15 16:30:18.198754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.883 [2024-07-15 16:30:18.198764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.883 [2024-07-15 16:30:18.198773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.883 [2024-07-15 16:30:18.198782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.883 [2024-07-15 16:30:18.198791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.883 [2024-07-15 16:30:18.198800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.883 [2024-07-15 16:30:18.198809] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440950 is same with the state(5) to be set 00:15:32.883
[this *ERROR* line repeats for tqpair=0x1440950 from 16:30:18.198817 through 16:30:18.199737 while the 4420 listener is being removed; duplicates condensed]
00:15:32.884 16:30:18 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:15:36.171 16:30:21 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:36.171
00:15:36.171 16:30:21 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:15:36.429 16:30:21 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:15:39.714 16:30:24 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:39.714 [2024-07-15 16:30:25.107218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:39.714 16:30:25 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:15:40.651 16:30:26 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:15:40.925 16:30:26 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75869
00:15:47.525 0
00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover --
host/failover.sh@61 -- # killprocess 75840 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75840 ']' 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75840 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75840 00:15:47.525 killing process with pid 75840 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75840' 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75840 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75840 00:15:47.525 16:30:32 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:47.525 [2024-07-15 16:30:15.203260] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:15:47.525 [2024-07-15 16:30:15.203367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75840 ] 00:15:47.525 [2024-07-15 16:30:15.341261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.525 [2024-07-15 16:30:15.512438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.525 [2024-07-15 16:30:15.591099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:47.525 Running I/O for 15 seconds... 
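The noise that follows in try.txt is the failover itself, not a failure: bdevperf attached NVMe0 to nqn.2016-06.io.spdk:cnode1 through both 10.0.0.2:4420 and 10.0.0.2:4421, started a queue-depth-128, 4 KiB verify workload for 15 seconds, and while it ran the test removed and re-added listeners so I/O had to move between ports. Reads in flight on the path being torn down are completed back as ABORTED - SQ DELETION while the workload continues on the surviving path, and the run still ends with the 0 result reported above. Condensed from the RPC calls recorded earlier in this log (sleeps as in failover.sh):

# condensed from the failover.sh steps recorded earlier in this log
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_rpc="$rpc_py -s /var/tmp/bdevperf.sock"
$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # I/O moves to 4421
sleep 3
$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # I/O moves to 4422
sleep 3
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
wait $!   # the perform_tests helper (pid 75869 in this run) returns when the 15-second run completes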
00:15:47.525 [2024-07-15 16:30:18.199801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:47.525 [2024-07-15 16:30:18.199880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:47.525 [the same READ / ABORTED - SQ DELETION pair repeats for each in-flight command on sqid:1, lba 60712 through 61176, from 16:30:18.199914 through 16:30:18.201824; duplicates condensed]
00:15:47.526 [2024-07-15 16:30:18.201846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:47.526 [2024-07-15 16:30:18.201882]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.201909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.201924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.201940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.201954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.201971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.201985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202197] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:47.526 [2024-07-15 16:30:18.202851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.202971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.202985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203194] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.526 [2024-07-15 16:30:18.203496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:122 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.526 [2024-07-15 16:30:18.203837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.526 [2024-07-15 16:30:18.203853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.527 [2024-07-15 16:30:18.203881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.527 [2024-07-15 16:30:18.203898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.527 [2024-07-15 16:30:18.203913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.527 [2024-07-15 16:30:18.203943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.527 [2024-07-15 16:30:18.203964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.527 [2024-07-15 16:30:18.203981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.527 [2024-07-15 16:30:18.203996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.527 [2024-07-15 16:30:18.204012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.527 [2024-07-15 16:30:18.204026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.527 [2024-07-15 16:30:18.204042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b7c0 is same with the state(5) to be set 00:15:47.527 [2024-07-15 16:30:18.204060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.527 [2024-07-15 16:30:18.204071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.527 [2024-07-15 16:30:18.204082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0 00:15:47.527 [2024-07-15 16:30:18.204096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.527 [2024-07-15 16:30:18.204169] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa7b7c0 was disconnected and freed. reset controller. 
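The "(00/08)" printed with each of these completions is the NVMe status code type / status code pair: type 00h is the generic command status set, and code 08h is "Command Aborted due to SQ Deletion", which is the expected status when the host deletes the I/O submission queue while resetting the controller. Below is a minimal Python sketch that decodes that pair from a line of this log; it is a hypothetical helper, not part of SPDK or of these test scripts, and it only lists the generic status codes that appear in this output.

# decode_status.py - hypothetical helper, not part of SPDK or of this test run.
# Decodes the "(sct/sc)" pair that spdk_nvme_print_completion prints, e.g. "(00/08)".
import re

# Generic command status values (status code type 00h) from the NVMe base spec;
# only the codes that show up in this log are listed here.
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x07: "COMMAND ABORT REQUESTED",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",
}

def decode(line):
    m = re.search(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", line)
    if not m:
        return None
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    if sct == 0x00:  # generic command status
        return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
    return "sct 0x%02x / sc 0x%02x" % (sct, sc)

print(decode("*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000"))
# prints: COMMAND ABORTED DUE TO SQ DELETION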
00:15:47.527 [2024-07-15 16:30:18.204188] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:15:47.527 [2024-07-15 16:30:18.204247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:47.527 [2024-07-15 16:30:18.204267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:47.527 [2024-07-15 16:30:18.204283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:47.527 [2024-07-15 16:30:18.204297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:47.527 [2024-07-15 16:30:18.204312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:47.527 [2024-07-15 16:30:18.204326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:47.527 [2024-07-15 16:30:18.204341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:47.527 [2024-07-15 16:30:18.204356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:47.527 [2024-07-15 16:30:18.204369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:47.527 [2024-07-15 16:30:18.204425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2a570 (9): Bad file descriptor
00:15:47.527 [2024-07-15 16:30:18.208273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:47.527 [2024-07-15 16:30:18.249769] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
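When skimming a console log like this one, the useful signal is the number of SQ-deletion aborts per queue and the order of the failover/reset notices around them. The Python sketch below is a hypothetical helper for producing that summary from a saved copy of this output; the filename console.log and the one-log-entry-per-line layout are assumptions, not something this job produces.

# summarize_failover.py - hypothetical helper for a saved copy of this console output.
# Assumes the file "console.log" exists locally and holds one log entry per line.
import re
from collections import Counter

aborts_per_qid = Counter()   # qid -> number of "ABORTED - SQ DELETION" completions
milestones = []              # failover / reset notices, in the order they appear

abort_re = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")
event_re = re.compile(r"Start failover from \S+ to \S+"
                      r"|\[nqn\.[^\]]+\] resetting controller"
                      r"|Resetting controller successful\.")

with open("console.log") as log:
    for line in log:
        if m := abort_re.search(line):
            aborts_per_qid[m.group(1)] += 1
        if m := event_re.search(line):
            milestones.append(m.group(0))

for qid, n in sorted(aborts_per_qid.items()):
    print("qid:%s: %d commands aborted by SQ deletion" % (qid, n))
print("milestones:", " | ".join(milestones))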
00:15:47.527 [2024-07-15 16:30:21.821273 .. 16:30:21.824364] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs for the I/O outstanding on sqid:1 — WRITE lba:61536-62064 and READ lba:61152-61336, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:47.528 [2024-07-15 16:30:21.824380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.528 [2024-07-15 16:30:21.824842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacd30 is same with the state(5) to be set 00:15:47.528 [2024-07-15 16:30:21.824888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.528 [2024-07-15 16:30:21.824900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.528 [2024-07-15 16:30:21.824912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61464 len:8 PRP1 0x0 PRP2 0x0 00:15:47.528 [2024-07-15 16:30:21.824926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.528 [2024-07-15 16:30:21.824953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.528 [2024-07-15 16:30:21.824965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62072 len:8 PRP1 0x0 PRP2 0x0 00:15:47.528 [2024-07-15 16:30:21.824979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.824993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.528 [2024-07-15 16:30:21.825004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.528 [2024-07-15 16:30:21.825015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62080 len:8 PRP1 0x0 PRP2 0x0 00:15:47.528 [2024-07-15 16:30:21.825041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.825058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.528 [2024-07-15 16:30:21.825069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.528 [2024-07-15 16:30:21.825101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62088 len:8 PRP1 0x0 PRP2 0x0 00:15:47.528 [2024-07-15 16:30:21.825117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.825131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.528 [2024-07-15 16:30:21.825143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.528 [2024-07-15 16:30:21.825154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62096 len:8 PRP1 0x0 PRP2 0x0 00:15:47.528 [2024-07-15 16:30:21.825169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.825203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.528 [2024-07-15 16:30:21.825215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.528 [2024-07-15 16:30:21.825226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62104 len:8 PRP1 0x0 PRP2 0x0 00:15:47.528 [2024-07-15 16:30:21.825240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.825254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.528 [2024-07-15 16:30:21.825265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.528 [2024-07-15 16:30:21.825276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62112 len:8 PRP1 0x0 PRP2 0x0 00:15:47.528 [2024-07-15 16:30:21.825290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.825305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.528 [2024-07-15 16:30:21.825316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.528 [2024-07-15 16:30:21.825327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62120 len:8 PRP1 0x0 PRP2 0x0 00:15:47.528 [2024-07-15 16:30:21.825341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.528 [2024-07-15 16:30:21.825355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.528 [2024-07-15 16:30:21.825366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.528 [2024-07-15 16:30:21.825377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62128 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 
16:30:21.825405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62136 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62144 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62152 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62160 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62168 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61472 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825723] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61480 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61488 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61496 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61504 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.825950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.825960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.825972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61512 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.825986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.826000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.529 [2024-07-15 16:30:21.826011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.826028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61520 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.826043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.826058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:15:47.529 [2024-07-15 16:30:21.826069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.529 [2024-07-15 16:30:21.826080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61528 len:8 PRP1 0x0 PRP2 0x0 00:15:47.529 [2024-07-15 16:30:21.826094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.826167] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaacd30 was disconnected and freed. reset controller. 00:15:47.529 [2024-07-15 16:30:21.826187] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:47.529 [2024-07-15 16:30:21.826260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.529 [2024-07-15 16:30:21.826281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.826304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.529 [2024-07-15 16:30:21.826319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.826334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.529 [2024-07-15 16:30:21.826348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.826363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.529 [2024-07-15 16:30:21.826377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:21.826391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:47.529 [2024-07-15 16:30:21.826437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2a570 (9): Bad file descriptor 00:15:47.529 [2024-07-15 16:30:21.830254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:47.529 [2024-07-15 16:30:21.867229] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:47.529 [2024-07-15 16:30:26.411501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.411584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.411631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.411662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.411722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.411754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.411785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.411815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.411845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.411892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.411922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411938] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.411952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.411982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.411998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412555] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.529 [2024-07-15 16:30:26.412839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.412898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.412988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.413008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.529 [2024-07-15 16:30:26.413033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.529 [2024-07-15 16:30:26.413051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130392 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.413465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.413496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.413525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.413555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 
[2024-07-15 16:30:26.413585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.413615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.413645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.413676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.413983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.413997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.414215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.414245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.414275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.414304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.414334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.414364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.414393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.530 [2024-07-15 16:30:26.414423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414512] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.530 [2024-07-15 16:30:26.414891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabdd0 is same with the state(5) to be set 00:15:47.530 [2024-07-15 16:30:26.414923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.530 [2024-07-15 16:30:26.414934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.530 [2024-07-15 16:30:26.414952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130688 len:8 PRP1 0x0 PRP2 0x0 00:15:47.530 [2024-07-15 16:30:26.414967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.414982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.530 [2024-07-15 16:30:26.414992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.530 [2024-07-15 16:30:26.415002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131016 len:8 PRP1 0x0 PRP2 0x0 00:15:47.530 [2024-07-15 16:30:26.415015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.530 [2024-07-15 16:30:26.415029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.530 [2024-07-15 16:30:26.415039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131024 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131032 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415157] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131040 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131048 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131056 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131064 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:16 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 
[2024-07-15 16:30:26.415769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.415954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.415965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.415978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.415992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.416002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.416013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.416026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.416040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.416050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.416061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.416074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.416088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.416098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.416109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.416132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.416146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.531 [2024-07-15 16:30:26.416156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.531 [2024-07-15 16:30:26.416173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:8 PRP1 0x0 PRP2 0x0 00:15:47.531 [2024-07-15 16:30:26.416188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.416246] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaabdd0 was disconnected and freed. reset controller. 00:15:47.531 [2024-07-15 16:30:26.416265] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:47.531 [2024-07-15 16:30:26.416322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.531 [2024-07-15 16:30:26.416342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.416358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.531 [2024-07-15 16:30:26.416372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.416387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.531 [2024-07-15 16:30:26.416400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.416415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.531 [2024-07-15 16:30:26.416428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.531 [2024-07-15 16:30:26.416442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:47.531 [2024-07-15 16:30:26.416479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2a570 (9): Bad file descriptor 00:15:47.531 [2024-07-15 16:30:26.420284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:47.531 [2024-07-15 16:30:26.461605] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
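The block above is bdev_nvme draining qpair 0xaabdd0 during the failover from 10.0.0.2:4422 back to 10.0.0.2:4420: each queued READ/WRITE is completed manually with ABORTED - SQ DELETION before the controller is reset against the new portal. A quick way to tally those aborts from the captured bdevperf output is sketched below; this is an illustrative command, not something failover.sh itself runs, and the try.txt path is the file the test writes and dumps later in this log.

    # Illustrative only: count the aborted completions in the captured bdevperf log.
    # The pattern matches the spdk_nvme_print_completion notices shown above.
    grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt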
00:15:47.531 00:15:47.531 Latency(us) 00:15:47.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.531 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:47.531 Verification LBA range: start 0x0 length 0x4000 00:15:47.531 NVMe0n1 : 15.01 8564.55 33.46 243.92 0.00 14498.01 651.64 18469.24 00:15:47.531 =================================================================================================================== 00:15:47.531 Total : 8564.55 33.46 243.92 0.00 14498.01 651.64 18469.24 00:15:47.531 Received shutdown signal, test time was about 15.000000 seconds 00:15:47.531 00:15:47.531 Latency(us) 00:15:47.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.531 =================================================================================================================== 00:15:47.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:47.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76043 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76043 /var/tmp/bdevperf.sock 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76043 ']' 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
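The pass criterion for this stage is that the captured output contains exactly three 'Resetting controller successful' notices, which is what the count=3 and (( count != 3 )) lines above check; the script then relaunches bdevperf as a standalone RPC server and waits for its UNIX socket. A rough paraphrase of host/failover.sh@65-@75 as seen above follows, with $bdevperf_output standing in for whatever file the grep at @65 actually reads and with the error handling simplified.

    # Paraphrase of the check-and-relaunch step; $bdevperf_output is a stand-in name.
    count=$(grep -c 'Resetting controller successful' "$bdevperf_output")
    (( count == 3 )) || exit 1
    # Relaunch bdevperf in RPC-server mode (-z) and wait for /var/tmp/bdevperf.sock.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # waitforlisten is the autotest_common.sh helper used throughout this log.
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock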
00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.531 16:30:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:48.147 16:30:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.147 16:30:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:48.147 16:30:33 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:48.147 [2024-07-15 16:30:33.618650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:48.147 16:30:33 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:48.404 [2024-07-15 16:30:33.923110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:48.404 16:30:33 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.968 NVMe0n1 00:15:48.968 16:30:34 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:49.226 00:15:49.226 16:30:34 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:49.484 00:15:49.484 16:30:34 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:49.484 16:30:34 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:49.741 16:30:35 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:49.999 16:30:35 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:53.345 16:30:38 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.345 16:30:38 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:53.345 16:30:38 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76120 00:15:53.345 16:30:38 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:53.345 16:30:38 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76120 00:15:54.721 0 00:15:54.721 16:30:39 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:54.721 [2024-07-15 16:30:32.400579] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
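For readability, the RPC sequence host/failover.sh issues above (roughly @76 through @89) is consolidated below. This is a condensed restatement of commands already visible in the log, not additional test logic; the for loop is shorthand for the three separate attach calls.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Expose the subsystem on two additional portals next to the original 4420.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Attach NVMe0 through bdevperf's RPC socket on all three portals.
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # Confirm the controller exists, then drop the primary path so I/O fails over.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    # Run the verify workload so it observes the path change.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests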
00:15:54.721 [2024-07-15 16:30:32.400720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76043 ] 00:15:54.721 [2024-07-15 16:30:32.537498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.722 [2024-07-15 16:30:32.658959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.722 [2024-07-15 16:30:32.713313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:54.722 [2024-07-15 16:30:35.462022] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:54.722 [2024-07-15 16:30:35.462140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.722 [2024-07-15 16:30:35.462165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.722 [2024-07-15 16:30:35.462200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.722 [2024-07-15 16:30:35.462219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.722 [2024-07-15 16:30:35.462234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.722 [2024-07-15 16:30:35.462247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.722 [2024-07-15 16:30:35.462262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.722 [2024-07-15 16:30:35.462275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.722 [2024-07-15 16:30:35.462289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:54.722 [2024-07-15 16:30:35.462343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:54.722 [2024-07-15 16:30:35.462375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb570 (9): Bad file descriptor 00:15:54.722 [2024-07-15 16:30:35.467360] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:54.722 Running I/O for 1 seconds... 
00:15:54.722 00:15:54.722 Latency(us) 00:15:54.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.722 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:54.722 Verification LBA range: start 0x0 length 0x4000 00:15:54.722 NVMe0n1 : 1.02 6689.44 26.13 0.00 0.00 19057.31 2278.87 15966.95 00:15:54.722 =================================================================================================================== 00:15:54.722 Total : 6689.44 26.13 0.00 0.00 19057.31 2278.87 15966.95 00:15:54.722 16:30:39 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:54.722 16:30:39 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:54.722 16:30:40 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:54.979 16:30:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:54.979 16:30:40 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:55.237 16:30:40 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:55.495 16:30:40 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:58.776 16:30:43 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:58.776 16:30:43 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76043 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76043 ']' 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76043 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76043 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:58.776 killing process with pid 76043 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76043' 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76043 00:15:58.776 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76043 00:15:59.034 16:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:59.034 16:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.292 16:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:59.292 16:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:59.292 16:30:44 
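The same pattern then repeats for the remaining paths: after the one-second verify run above, the script confirms NVMe0 is still present and detaches the next portal, giving bdev_nvme a few seconds to fail over again. Condensed from the host/failover.sh@95-@101 commands logged above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # same shorthand as earlier
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3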
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:59.292 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.292 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:59.292 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.292 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:59.292 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.292 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.292 rmmod nvme_tcp 00:15:59.292 rmmod nvme_fabrics 00:15:59.292 rmmod nvme_keyring 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75782 ']' 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75782 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75782 ']' 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75782 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75782 00:15:59.551 killing process with pid 75782 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75782' 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75782 00:15:59.551 16:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75782 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:59.810 00:15:59.810 real 0m33.304s 00:15:59.810 user 2m9.315s 00:15:59.810 sys 0m5.561s 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:59.810 16:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:59.810 ************************************ 00:15:59.810 END TEST nvmf_failover 00:15:59.810 ************************************ 00:15:59.810 16:30:45 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:59.810 16:30:45 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:59.810 16:30:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:59.810 16:30:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.810 16:30:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:59.810 ************************************ 00:15:59.810 START TEST nvmf_host_discovery 00:15:59.810 ************************************ 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:59.810 * Looking for test storage... 00:15:59.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.810 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:00.069 Cannot find device "nvmf_tgt_br" 00:16:00.069 
16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.069 Cannot find device "nvmf_tgt_br2" 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:00.069 Cannot find device "nvmf_tgt_br" 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:00.069 Cannot find device "nvmf_tgt_br2" 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:00.069 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:00.327 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:00.327 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:00.327 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:00.327 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:00.327 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:00.327 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:00.327 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:00.327 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:00.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:16:00.327 00:16:00.327 --- 10.0.0.2 ping statistics --- 00:16:00.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.327 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:00.327 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:00.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:00.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:00.327 00:16:00.327 --- 10.0.0.3 ping statistics --- 00:16:00.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.328 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:00.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:00.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:00.328 00:16:00.328 --- 10.0.0.1 ping statistics --- 00:16:00.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.328 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76386 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76386 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76386 ']' 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.328 16:30:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.328 [2024-07-15 16:30:45.778708] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:16:00.328 [2024-07-15 16:30:45.778801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.586 [2024-07-15 16:30:45.920754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.586 [2024-07-15 16:30:46.051449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
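Before starting the target, the host-discovery test rebuilds the veth/netns topology from nvmf/common.sh; the earlier 'Cannot find device' and 'Cannot open network namespace' messages are just the idempotent cleanup pass finding nothing to delete. The setup performed by the @166-@207 commands above, plus the target launch at @480, is condensed below (the individual 'ip link set ... up' steps are omitted for brevity, and backgrounding the target with '&' is a paraphrase of the nvmfappstart/waitforlisten flow).

    # Condensed view of nvmf_veth_init as executed above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bridge the host-side peers together and allow NVMe/TCP (port 4420) in.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity checks in both directions, then start the target in the namespace.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &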
00:16:00.586 [2024-07-15 16:30:46.051535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.586 [2024-07-15 16:30:46.051561] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.586 [2024-07-15 16:30:46.051581] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.586 [2024-07-15 16:30:46.051597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.586 [2024-07-15 16:30:46.051657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.586 [2024-07-15 16:30:46.108368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.520 [2024-07-15 16:30:46.792402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.520 [2024-07-15 16:30:46.804489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.520 null0 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.520 null1 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76420 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76420 /tmp/host.sock 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76420 ']' 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:01.520 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.520 16:30:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.520 [2024-07-15 16:30:46.893358] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:16:01.521 [2024-07-15 16:30:46.893473] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76420 ] 00:16:01.521 [2024-07-15 16:30:47.036966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.778 [2024-07-15 16:30:47.166846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.778 [2024-07-15 16:30:47.223711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:02.343 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.343 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:02.343 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 16:30:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.601 16:30:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.917 [2024-07-15 16:30:48.260848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:02.917 
16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.917 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:03.189 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:03.190 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.190 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:03.190 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.190 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:03.190 16:30:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:03.190 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.190 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:16:03.190 16:30:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:16:03.447 [2024-07-15 16:30:48.917614] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:03.447 [2024-07-15 16:30:48.917679] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:03.447 [2024-07-15 16:30:48.917702] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:03.447 [2024-07-15 16:30:48.923657] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:03.447 [2024-07-15 16:30:48.981147] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:16:03.447 [2024-07-15 16:30:48.981211] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:04.036 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.036 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:04.036 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:04.036 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:04.036 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:04.036 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:04.036 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.037 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.294 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.294 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:04.294 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.294 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:04.294 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:04.294 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.294 16:30:49 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.294 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:04.294 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:04.294 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.295 [2024-07-15 16:30:49.834203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:04.295 [2024-07-15 16:30:49.835044] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:04.295 [2024-07-15 16:30:49.835079] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:04.295 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:04.296 [2024-07-15 16:30:49.841038] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:04.296 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:04.296 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.296 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:04.296 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:04.296 16:30:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.296 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.553 [2024-07-15 16:30:49.901323] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:04.553 [2024-07-15 16:30:49.901353] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:04.553 [2024-07-15 16:30:49.901361] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.553 16:30:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.553 [2024-07-15 16:30:50.063322] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:04.553 [2024-07-15 16:30:50.063365] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:04.553 [2024-07-15 16:30:50.068125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.553 [2024-07-15 16:30:50.068161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.553 [2024-07-15 16:30:50.068175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.553 [2024-07-15 16:30:50.068186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.553 [2024-07-15 16:30:50.068196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.553 [2024-07-15 16:30:50.068206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.553 [2024-07-15 16:30:50.068217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.553 [2024-07-15 16:30:50.068226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.553 [2024-07-15 16:30:50.068236] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97c600 is same with the state(5) to be set 00:16:04.553 [2024-07-15 16:30:50.069317] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:04.553 [2024-07-15 16:30:50.069348] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:04.553 [2024-07-15 16:30:50.069414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97c600 (9): Bad file descriptor 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:04.553 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:04.811 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:04.812 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:05.070 16:30:50 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.070 16:30:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.004 [2024-07-15 16:30:51.498396] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:06.004 [2024-07-15 16:30:51.498441] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:06.004 [2024-07-15 16:30:51.498462] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:06.004 [2024-07-15 16:30:51.504432] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:06.263 [2024-07-15 16:30:51.565234] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:06.263 [2024-07-15 16:30:51.565303] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.263 request: 00:16:06.263 { 00:16:06.263 "name": "nvme", 00:16:06.263 "trtype": "tcp", 00:16:06.263 "traddr": "10.0.0.2", 00:16:06.263 "adrfam": "ipv4", 00:16:06.263 "trsvcid": 
"8009", 00:16:06.263 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:06.263 "wait_for_attach": true, 00:16:06.263 "method": "bdev_nvme_start_discovery", 00:16:06.263 "req_id": 1 00:16:06.263 } 00:16:06.263 Got JSON-RPC error response 00:16:06.263 response: 00:16:06.263 { 00:16:06.263 "code": -17, 00:16:06.263 "message": "File exists" 00:16:06.263 } 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:16:06.263 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.264 request: 00:16:06.264 { 00:16:06.264 "name": "nvme_second", 00:16:06.264 "trtype": "tcp", 00:16:06.264 "traddr": "10.0.0.2", 00:16:06.264 "adrfam": "ipv4", 00:16:06.264 "trsvcid": "8009", 00:16:06.264 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:06.264 "wait_for_attach": true, 00:16:06.264 "method": "bdev_nvme_start_discovery", 00:16:06.264 "req_id": 1 00:16:06.264 } 00:16:06.264 Got JSON-RPC error response 00:16:06.264 response: 00:16:06.264 { 00:16:06.264 "code": -17, 00:16:06.264 "message": "File exists" 00:16:06.264 } 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:06.264 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.523 16:30:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.459 [2024-07-15 16:30:52.834018] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:07.459 [2024-07-15 16:30:52.834110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995f20 with addr=10.0.0.2, port=8010 00:16:07.459 [2024-07-15 16:30:52.834139] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:07.459 [2024-07-15 16:30:52.834152] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:07.459 [2024-07-15 16:30:52.834163] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:08.395 [2024-07-15 16:30:53.833994] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:08.395 [2024-07-15 16:30:53.834068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x995f20 with addr=10.0.0.2, port=8010 00:16:08.395 [2024-07-15 16:30:53.834105] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:08.395 [2024-07-15 16:30:53.834117] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:08.395 [2024-07-15 16:30:53.834127] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:09.330 [2024-07-15 16:30:54.833802] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:09.330 request: 00:16:09.330 { 00:16:09.330 "name": "nvme_second", 00:16:09.330 "trtype": "tcp", 00:16:09.330 "traddr": "10.0.0.2", 00:16:09.330 "adrfam": "ipv4", 00:16:09.330 "trsvcid": "8010", 00:16:09.330 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:09.330 "wait_for_attach": false, 00:16:09.330 "attach_timeout_ms": 3000, 00:16:09.330 "method": "bdev_nvme_start_discovery", 00:16:09.330 "req_id": 1 00:16:09.330 } 00:16:09.330 Got JSON-RPC error response 00:16:09.330 response: 00:16:09.330 { 00:16:09.330 "code": -110, 00:16:09.330 "message": "Connection timed out" 00:16:09.330 } 00:16:09.330 16:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:09.330 16:30:54 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:09.331 16:30:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76420 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.589 rmmod nvme_tcp 00:16:09.589 rmmod nvme_fabrics 00:16:09.589 rmmod nvme_keyring 00:16:09.589 16:30:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76386 ']' 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76386 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76386 ']' 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76386 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76386 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:09.589 killing process with pid 76386 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 76386' 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76386 00:16:09.589 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76386 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:09.848 00:16:09.848 real 0m10.097s 00:16:09.848 user 0m19.386s 00:16:09.848 sys 0m2.001s 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.848 ************************************ 00:16:09.848 END TEST nvmf_host_discovery 00:16:09.848 ************************************ 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.848 16:30:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:09.848 16:30:55 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:09.848 16:30:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:09.848 16:30:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.848 16:30:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:09.848 ************************************ 00:16:09.848 START TEST nvmf_host_multipath_status 00:16:09.848 ************************************ 00:16:09.848 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:10.108 * Looking for test storage... 
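(Not part of the captured output.) The nvmf_host_discovery trace above ends by exercising the bdev_nvme_start_discovery error paths: re-issuing the RPC for a discovery endpoint that is already being monitored returns JSON-RPC error -17 ("File exists"), and pointing it at 10.0.0.2:8010, where nothing listens, with a 3000 ms attach timeout returns -110 ("Connection timed out"). A minimal bash sketch of those same calls, assuming the rpc.py path and /tmp/host.sock socket used in this job (rpc_host is only a local shorthand here, not a helper from the test suite):

rpc_host() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }

# First start attaches the discovered subsystem and, with -w, waits for the attach.
rpc_host bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w

# A second start against the same discovery endpoint is expected to fail with -17.
if ! rpc_host bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w; then
    echo "10.0.0.2:8009 is already being monitored (File exists), as the test expects"
fi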
00:16:10.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:10.108 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:10.109 Cannot find device "nvmf_tgt_br" 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:10.109 Cannot find device "nvmf_tgt_br2" 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:10.109 Cannot find device "nvmf_tgt_br" 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:10.109 Cannot find device "nvmf_tgt_br2" 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:10.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:10.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:10.109 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:10.368 16:30:55 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:10.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:10.368 00:16:10.368 --- 10.0.0.2 ping statistics --- 00:16:10.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.368 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:10.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:10.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:10.368 00:16:10.368 --- 10.0.0.3 ping statistics --- 00:16:10.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.368 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:10.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:10.368 00:16:10.368 --- 10.0.0.1 ping statistics --- 00:16:10.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.368 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:10.368 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76871 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76871 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76871 ']' 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.369 16:30:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:10.627 [2024-07-15 16:30:55.919252] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
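(Not part of the captured output.) The nvmf_veth_init steps above build the virtual topology this job runs on: the host keeps nvmf_init_if with 10.0.0.1/24, the SPDK target runs inside the nvmf_tgt_ns_spdk namespace on nvmf_tgt_if (10.0.0.2/24, plus 10.0.0.3/24 on a second interface), and the peer ends of the veth pairs are enslaved to the nvmf_br bridge. A condensed sketch of that layout, assuming root and iproute2, and omitting the second target interface and the port-4420 INPUT rule the job also installs:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Keep bridged traffic flowing when it is passed through iptables (as the job does).
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # host -> target namespace, matching the ping check in the log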
00:16:10.627 [2024-07-15 16:30:55.919341] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.627 [2024-07-15 16:30:56.056964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:10.886 [2024-07-15 16:30:56.210705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.886 [2024-07-15 16:30:56.211099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.886 [2024-07-15 16:30:56.211313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.886 [2024-07-15 16:30:56.211637] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.886 [2024-07-15 16:30:56.211848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.886 [2024-07-15 16:30:56.212146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.886 [2024-07-15 16:30:56.212169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.886 [2024-07-15 16:30:56.273297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:11.494 16:30:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:11.494 16:30:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:11.494 16:30:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:11.494 16:30:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:11.494 16:30:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:11.494 16:30:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.494 16:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76871 00:16:11.494 16:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:11.753 [2024-07-15 16:30:57.263294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.753 16:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:12.320 Malloc0 00:16:12.320 16:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:12.320 16:30:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:12.579 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.837 [2024-07-15 16:30:58.322304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.837 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:13.095 [2024-07-15 16:30:58.550593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:13.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76927 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76927 /var/tmp/bdevperf.sock 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76927 ']' 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.095 16:30:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:14.032 16:30:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.032 16:30:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:14.032 16:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:14.289 16:30:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:14.856 Nvme0n1 00:16:14.856 16:31:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:15.115 Nvme0n1 00:16:15.115 16:31:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:15.115 16:31:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:17.067 16:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:17.067 16:31:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:17.325 16:31:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:17.583 16:31:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:18.522 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:18.522 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:18.522 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.522 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:18.780 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.780 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:18.780 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.780 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:19.039 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:19.039 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:19.039 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.039 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:19.296 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.296 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:19.296 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.296 16:31:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:19.554 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.554 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:19.554 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.554 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:19.812 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.812 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:16:19.812 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:19.812 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.070 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.070 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:20.070 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:20.328 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:20.586 16:31:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:21.519 16:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:21.519 16:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:21.519 16:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.519 16:31:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:21.777 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:21.777 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:21.777 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.777 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:22.034 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.034 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:22.034 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:22.034 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.600 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.600 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:22.600 16:31:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.600 16:31:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:22.600 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.600 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:22.600 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.600 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:22.858 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.858 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:22.858 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.858 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:23.116 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.116 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:23.116 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:23.373 16:31:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:23.630 16:31:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:24.998 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:24.999 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:24.999 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.999 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:24.999 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.999 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:24.999 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.999 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:25.277 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:16:25.277 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:25.277 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.277 16:31:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:25.536 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.536 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:25.536 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:25.536 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.795 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.795 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:25.795 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:25.795 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.053 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.053 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:26.053 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.053 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:26.311 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.311 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:26.311 16:31:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:26.569 16:31:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:26.827 16:31:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:27.810 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:27.810 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:27.810 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:27.810 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.069 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.069 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:28.069 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.069 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:28.636 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:28.636 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:28.636 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.636 16:31:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:28.636 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.636 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:28.636 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:28.636 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.895 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.895 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:28.895 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.895 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:29.154 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.154 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:29.154 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.154 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:29.411 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.411 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 
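(Not part of the captured output.) Each check_status round above flips the ANA state of the 4420/4421 listeners with nvmf_subsystem_listener_set_ana_state and then asks bdevperf for its I/O paths, selecting one attribute per listener port with jq. A rough bash approximation of that per-port check, for illustration only (not the literal port_status helper from host/multipath_status.sh), assuming the rpc.py path and /var/tmp/bdevperf.sock socket used in this job:

# usage: port_status 4420 current true
port_status() {
    local port=$1 attr=$2 expected=$3 got
    got=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
              bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $got == "$expected" ]]
}

port_status 4420 accessible false   # e.g. after both listeners are set inaccessible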
00:16:29.411 16:31:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:29.670 16:31:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:29.929 16:31:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:30.888 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:30.888 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:30.888 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.888 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:31.147 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:31.147 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:31.147 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:31.147 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.715 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:31.715 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:31.715 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.715 16:31:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:31.974 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.974 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:31.974 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:31.974 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.233 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.233 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:32.233 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.233 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:32.492 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.492 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:32.492 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:32.492 16:31:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.751 16:31:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.751 16:31:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:32.752 16:31:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:33.010 16:31:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:33.269 16:31:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:34.205 16:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:34.205 16:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:34.205 16:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.205 16:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:34.464 16:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.464 16:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:34.464 16:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.464 16:31:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:34.722 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.722 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:34.722 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.722 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:34.981 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.981 16:31:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:34.981 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.981 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:35.240 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.240 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:35.240 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.240 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:35.500 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:35.500 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:35.500 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.500 16:31:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:35.758 16:31:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.758 16:31:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:36.016 16:31:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:36.016 16:31:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:36.275 16:31:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:36.560 16:31:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:37.500 16:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:37.500 16:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:37.500 16:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.500 16:31:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:37.758 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.758 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:37.758 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:37.758 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.015 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.015 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:38.015 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.015 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:38.273 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.273 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:38.273 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.273 16:31:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:38.531 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.531 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:38.531 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.531 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:38.788 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.788 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:38.788 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.788 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:39.046 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.046 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:39.046 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:39.304 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:39.562 16:31:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:40.497 16:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:40.497 16:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:40.497 16:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.497 16:31:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:40.756 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:40.756 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:40.756 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.756 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:41.014 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.014 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:41.014 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.014 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:41.270 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.270 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:41.270 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.270 16:31:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:41.528 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.528 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:41.528 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.528 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:42.093 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.093 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:42.093 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.093 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:42.093 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.093 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:42.093 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:42.658 16:31:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:42.658 16:31:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:44.031 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:44.031 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:44.031 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.031 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:44.031 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.031 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:44.031 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.031 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:44.290 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.290 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:44.290 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.290 16:31:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:44.549 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.549 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:44.549 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.549 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:16:44.808 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.808 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:44.808 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.808 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:45.386 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.386 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:45.386 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.386 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:45.386 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.386 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:45.386 16:31:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:45.951 16:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:45.951 16:31:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:47.329 16:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:47.329 16:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:47.329 16:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.329 16:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:47.329 16:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.329 16:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:47.329 16:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:47.329 16:31:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.588 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:47.588 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:16:47.589 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.589 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:47.848 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.848 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:47.848 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.848 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:48.107 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.107 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:48.107 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.107 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:48.365 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.366 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:48.366 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:48.366 16:31:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76927 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76927 ']' 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76927 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76927 00:16:48.624 killing process with pid 76927 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76927' 00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76927 
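Teardown of the bdevperf process goes through killprocess from common/autotest_common.sh. The trace shows the sequence: an argument guard ('[' -z 76927 ']'), a kill -0 liveness probe, a uname check, a ps lookup that resolves pid 76927 to reactor_2 (the bdevperf reactor thread, not a sudo wrapper), the kill itself, and a wait to reap the process. The helper body is not printed in the log; a sketch that follows the same sequence, with anything the trace does not exercise (such as the sudo branch) left as an assumption:

# Sketch of the killprocess flow seen in the xtrace above; the real helper in
# common/autotest_common.sh may handle more cases.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1      # the '[' -z 76927 ']' guard
    kill -0 "$pid" || return 0     # nothing to do if it already exited
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [[ $process_name == sudo ]]; then
            :                      # sudo-wrapped processes need extra handling (not exercised here)
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                    # reap it and collect the exit status
}

multipath_status.sh line 139 then issues its own wait 76927 before cat'ing the bdevperf log.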
00:16:48.624 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76927 00:16:48.885 Connection closed with partial response: 00:16:48.885 00:16:48.885 00:16:48.885 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76927 00:16:48.885 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:48.885 [2024-07-15 16:30:58.618709] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:16:48.885 [2024-07-15 16:30:58.618827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76927 ] 00:16:48.885 [2024-07-15 16:30:58.753505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.885 [2024-07-15 16:30:58.877046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.885 [2024-07-15 16:30:58.929935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:48.885 Running I/O for 90 seconds... 00:16:48.885 [2024-07-15 16:31:15.157352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.157482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.157578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.157618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.157666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.157705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.157743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17584 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.157782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.157826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.886 [2024-07-15 16:31:15.157880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.886 [2024-07-15 16:31:15.157921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.157943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.886 [2024-07-15 16:31:15.157986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.886 [2024-07-15 16:31:15.158026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.886 [2024-07-15 16:31:15.158063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.886 [2024-07-15 16:31:15.158100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.886 [2024-07-15 16:31:15.158139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.886 [2024-07-15 16:31:15.158178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:16:48.886 [2024-07-15 16:31:15.158608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.886 [2024-07-15 16:31:15.158814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.886 [2024-07-15 16:31:15.158851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:48.886 [2024-07-15 16:31:15.158899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.158916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.158938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.158953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.158983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.159734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:48.887 [2024-07-15 16:31:15.159772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.159817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.159866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.159908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.159959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.159982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.159999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.160038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.160075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.160113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.160158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.160196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.160235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.160273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.160319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.887 [2024-07-15 16:31:15.160357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.160394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.160440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.160481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.887 [2024-07-15 16:31:15.160518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:48.887 [2024-07-15 16:31:15.160540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.160555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.160612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.160650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.160694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.160732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.160770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.160808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.160847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.160900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.160952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.160975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.160992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
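This long run of nvme_qpair.c lines is the bdevperf log that multipath_status.sh line 141 dumps from test/nvmf/host/try.txt: line 243 (nvme_io_qpair_print_command) prints each submitted I/O and line 474 (spdk_nvme_print_completion) its completion. The ASYMMETRIC ACCESS INACCESSIBLE (03/02) statuses correspond to the windows in which a listener had been put into the inaccessible ANA state, I/O that the host-side multipath code is expected to retry on the remaining path. A hypothetical pair of one-liners (not part of the test, just a reading aid) to gauge how much of the dump carries that status:

# Count traced submissions and ANA-inaccessible completions in the dumped log.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
grep -c 'nvme_io_qpair_print_command' "$log"
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$log"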
00:16:48.888 [2024-07-15 16:31:15.161013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.161167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.161205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.161242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.161285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.161323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.161360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.161398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.888 [2024-07-15 16:31:15.161435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.888 [2024-07-15 16:31:15.161962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:48.888 [2024-07-15 16:31:15.161991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.162008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.162061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.162099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.162137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.162185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:15.162222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:15.162260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:48.889 [2024-07-15 16:31:15.162297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:15.162345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:15.162382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:15.162430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.162453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:15.162470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.163164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:15.163192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.163230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.163261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.163294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.163311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.163342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.163359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.163398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.163415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.163446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 
nsid:1 lba:18080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.163462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.163492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.163509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.163540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.163557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:15.163605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:15.163626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.477822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:31.477946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:31.478039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:31.478269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:31.478307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:31.478454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:31.478490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:16:48.889 [2024-07-15 16:31:31.478619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:31.478688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.889 [2024-07-15 16:31:31.478725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.889 [2024-07-15 16:31:31.478801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:48.889 [2024-07-15 16:31:31.478823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.478838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.478873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.478892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.478913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.478929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.478951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.478967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.478989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.479198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.479235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.479312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.479348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.479386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.479423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.890 [2024-07-15 16:31:31.479653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.479690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:48.890 [2024-07-15 16:31:31.479712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.890 [2024-07-15 16:31:31.479727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.479749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:48.891 [2024-07-15 16:31:31.479765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.479810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.891 [2024-07-15 16:31:31.479831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.479868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.479887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.479909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.479925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.479948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.479963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.891 [2024-07-15 16:31:31.481363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.891 [2024-07-15 16:31:31.481402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.891 [2024-07-15 16:31:31.481452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.891 [2024-07-15 16:31:31.481747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.891 [2024-07-15 16:31:31.481784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.891 [2024-07-15 16:31:31.481821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.891 [2024-07-15 16:31:31.481873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:48.891 [2024-07-15 16:31:31.481946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.891 [2024-07-15 16:31:31.481963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:48.891 Received shutdown signal, test time was about 33.414600 seconds 00:16:48.891 00:16:48.891 Latency(us) 00:16:48.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.891 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:48.891 Verification LBA range: start 0x0 length 0x4000 00:16:48.891 Nvme0n1 : 33.41 8357.31 32.65 0.00 0.00 15282.79 636.74 4026531.84 00:16:48.891 =================================================================================================================== 00:16:48.891 Total : 8357.31 32.65 0.00 0.00 15282.79 636.74 4026531.84 00:16:48.891 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.150 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:49.150 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:49.150 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:49.150 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.150 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.414 rmmod nvme_tcp 00:16:49.414 rmmod nvme_fabrics 00:16:49.414 rmmod nvme_keyring 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76871 ']' 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76871 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76871 ']' 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76871 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.414 16:31:34 
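# Quick consistency check on the Latency(us) summary above: with the job's 4096-byte
# I/O size, the IOPS and MiB/s columns agree (8357.31 IOPS * 4096 B / 1048576 B/MiB
# ~= 32.65 MiB/s), and 8357.31 IOPS over the ~33.41 s runtime is roughly 279k I/Os
# verified. A one-liner sketch to reproduce the bandwidth figure:
#   awk 'BEGIN { printf "%.2f MiB/s\n", 8357.31 * 4096 / 1048576 }'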
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76871 00:16:49.414 killing process with pid 76871 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76871' 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76871 00:16:49.414 16:31:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76871 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:49.674 00:16:49.674 real 0m39.688s 00:16:49.674 user 2m7.258s 00:16:49.674 sys 0m12.426s 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:49.674 16:31:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:49.674 ************************************ 00:16:49.674 END TEST nvmf_host_multipath_status 00:16:49.674 ************************************ 00:16:49.674 16:31:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:49.674 16:31:35 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:49.674 16:31:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:49.674 16:31:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.674 16:31:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:49.674 ************************************ 00:16:49.674 START TEST nvmf_discovery_remove_ifc 00:16:49.674 ************************************ 00:16:49.674 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:49.674 * Looking for test storage... 
00:16:49.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:49.674 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:49.674 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.933 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:49.934 Cannot find device "nvmf_tgt_br" 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:49.934 Cannot find device "nvmf_tgt_br2" 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:49.934 Cannot find device "nvmf_tgt_br" 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:49.934 Cannot find device "nvmf_tgt_br2" 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:49.934 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:50.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:16:50.194 00:16:50.194 --- 10.0.0.2 ping statistics --- 00:16:50.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.194 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:50.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:50.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:16:50.194 00:16:50.194 --- 10.0.0.3 ping statistics --- 00:16:50.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.194 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:50.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:50.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:50.194 00:16:50.194 --- 10.0.0.1 ping statistics --- 00:16:50.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.194 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77718 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77718 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77718 ']' 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.194 16:31:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.194 [2024-07-15 16:31:35.665835] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
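For reference, the trace above has just finished nvmf_veth_init: a network namespace holds the target-side ends of two veth pairs, a bridge joins the host-side peers, an iptables rule opens TCP port 4420, and the three pings confirm connectivity between 10.0.0.1 and 10.0.0.2/10.0.0.3 before nvmf_tgt is launched inside the namespace. A minimal sketch of that topology, using only the names and addresses visible in the traced commands (the second pair, nvmf_tgt_if2/nvmf_tgt_br2 at 10.0.0.3, is wired up the same way):

# Sketch of the topology nvmf_veth_init builds above; names/addresses from the trace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                # initiator address on the host side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                         # bridge joins the host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                      # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target namespace -> host

Because the target lives behind this namespace, the trace prefixes nvmf_tgt with "ip netns exec nvmf_tgt_ns_spdk", which is exactly the NVMF_TARGET_NS_CMD prefix applied to NVMF_APP above.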
00:16:50.194 [2024-07-15 16:31:35.665962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.452 [2024-07-15 16:31:35.798764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.452 [2024-07-15 16:31:35.914939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.452 [2024-07-15 16:31:35.914987] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.452 [2024-07-15 16:31:35.915014] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.452 [2024-07-15 16:31:35.915023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.452 [2024-07-15 16:31:35.915030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.452 [2024-07-15 16:31:35.915054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.452 [2024-07-15 16:31:35.968199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.386 [2024-07-15 16:31:36.711006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.386 [2024-07-15 16:31:36.719119] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:51.386 null0 00:16:51.386 [2024-07-15 16:31:36.751068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77750 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77750 /tmp/host.sock 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77750 ']' 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:51.386 16:31:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.386 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.386 16:31:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.386 [2024-07-15 16:31:36.821616] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:16:51.386 [2024-07-15 16:31:36.821706] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77750 ] 00:16:51.645 [2024-07-15 16:31:36.971674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.645 [2024-07-15 16:31:37.118785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.639 16:31:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.639 [2024-07-15 16:31:37.959104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:52.639 16:31:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.639 16:31:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:52.639 16:31:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.639 16:31:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.573 [2024-07-15 16:31:39.024324] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:53.573 [2024-07-15 16:31:39.024389] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:53.573 [2024-07-15 16:31:39.024410] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:53.573 [2024-07-15 16:31:39.030374] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:53.573 [2024-07-15 16:31:39.088197] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:53.573 [2024-07-15 16:31:39.088319] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:53.573 [2024-07-15 16:31:39.088356] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:53.573 [2024-07-15 16:31:39.088381] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:53.573 [2024-07-15 16:31:39.088415] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.573 [2024-07-15 16:31:39.092949] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2093de0 was disconnected and freed. delete nvme_qpair. 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.573 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:53.832 16:31:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:54.763 16:31:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:56.137 16:31:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.071 16:31:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:58.039 16:31:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:58.972 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.972 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.972 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.972 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.972 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.972 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.972 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.972 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.972 [2024-07-15 16:31:44.515319] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:58.972 [2024-07-15 16:31:44.515435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.972 [2024-07-15 16:31:44.515453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.972 [2024-07-15 16:31:44.515469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.972 [2024-07-15 16:31:44.515480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.972 [2024-07-15 16:31:44.515492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.972 [2024-07-15 16:31:44.515501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.972 [2024-07-15 16:31:44.515512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:16:58.972 [2024-07-15 16:31:44.515521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.972 [2024-07-15 16:31:44.515533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.972 [2024-07-15 16:31:44.515543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.973 [2024-07-15 16:31:44.515553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff9ac0 is same with the state(5) to be set 00:16:58.973 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:58.973 16:31:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:59.231 [2024-07-15 16:31:44.525303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff9ac0 (9): Bad file descriptor 00:16:59.231 [2024-07-15 16:31:44.535337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:00.166 [2024-07-15 16:31:45.599054] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:00.166 [2024-07-15 16:31:45.599255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff9ac0 with addr=10.0.0.2, port=4420 00:17:00.166 [2024-07-15 16:31:45.599314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff9ac0 is same with the state(5) to be set 00:17:00.166 [2024-07-15 16:31:45.599420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff9ac0 (9): Bad file descriptor 00:17:00.166 [2024-07-15 16:31:45.600228] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:00.166 [2024-07-15 16:31:45.600285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:00.166 [2024-07-15 16:31:45.600305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:00.166 [2024-07-15 16:31:45.600324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:00.166 [2024-07-15 16:31:45.600365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
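The connect() errno 110 and "Resetting controller failed" messages just above are the point of this test: discovery was started on the host side with short loss/reconnect timeouts, and the target-side address was then deleted and the interface downed, so every reconnect attempt times out until bdev_nvme gives up and deletes the controller and its bdev. The two steps, as traced (a sketch; rpc.py stands in for the harness's rpc_cmd wrapper, talking to the host app's /tmp/host.sock):

# Fault injection as traced in discovery_remove_ifc.sh.
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach
# Pull the target-side address and interface out from under the established connection:
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down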
00:17:00.166 [2024-07-15 16:31:45.600386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:00.166 16:31:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:01.096 [2024-07-15 16:31:46.600468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:01.096 [2024-07-15 16:31:46.600590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:01.097 [2024-07-15 16:31:46.600613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:01.097 [2024-07-15 16:31:46.600625] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:01.097 [2024-07-15 16:31:46.600670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:01.097 [2024-07-15 16:31:46.600713] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:01.097 [2024-07-15 16:31:46.600797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.097 [2024-07-15 16:31:46.600816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.097 [2024-07-15 16:31:46.600833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.097 [2024-07-15 16:31:46.600854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.097 [2024-07-15 16:31:46.600886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.097 [2024-07-15 16:31:46.600898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.097 [2024-07-15 16:31:46.600909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.097 [2024-07-15 16:31:46.600919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.097 [2024-07-15 16:31:46.600931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.097 [2024-07-15 16:31:46.600941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.097 [2024-07-15 16:31:46.600952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
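While the controller is failing over, the test sits in the wait_for_bdev / get_bdev_list loop whose one-second iterations fill the trace above: bdev_get_bdevs piped through jq, sort and xargs until the name list matches the expected value. A simplified sketch of those helpers (the real versions live in test/nvmf/host/discovery_remove_ifc.sh; this only captures the shape seen in the trace):

# Simplified sketch of the polling helpers traced above.
get_bdev_list() {
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev ''    # after the interface is removed, the list drains to empty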
00:17:01.097 [2024-07-15 16:31:46.601466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffd860 (9): Bad file descriptor 00:17:01.097 [2024-07-15 16:31:46.602477] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:01.097 [2024-07-15 16:31:46.602519] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:01.097 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.097 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.097 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.097 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.097 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.097 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.097 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.097 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:01.354 16:31:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:02.286 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:02.286 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.286 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.286 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:02.286 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:02.286 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:02.286 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:02.286 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.286 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:02.287 16:31:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:03.220 [2024-07-15 16:31:48.609208] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:03.220 [2024-07-15 16:31:48.609269] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:03.220 [2024-07-15 16:31:48.609291] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:03.220 [2024-07-15 16:31:48.615249] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:03.220 [2024-07-15 16:31:48.672312] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:03.220 [2024-07-15 16:31:48.672416] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:03.220 [2024-07-15 16:31:48.672448] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:03.220 [2024-07-15 16:31:48.672472] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:03.220 [2024-07-15 16:31:48.672484] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:03.220 [2024-07-15 16:31:48.677821] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20a0d90 was disconnected and freed. delete nvme_qpair. 
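With the address re-added and nvmf_tgt_if brought back up (the @82/@83 steps a few entries back), the discovery poller re-attaches and recreates the namespace as a new bdev, nvme1n1; the bdev-list check that follows below confirms it before the traps are cleared and both targets are killed. The recovery half of the test, in the same sketch form as above:

# Recovery path as traced: restore the interface, then wait for the re-attached
# discovery controller to bring the namespace back (it reappears as nvme1n1).
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1    # same polling helper sketched earlier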
00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77750 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77750 ']' 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77750 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77750 00:17:03.479 killing process with pid 77750 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77750' 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77750 00:17:03.479 16:31:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77750 00:17:03.737 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:03.737 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.737 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:03.737 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.737 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:03.737 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.737 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.737 rmmod nvme_tcp 00:17:03.997 rmmod nvme_fabrics 00:17:03.997 rmmod nvme_keyring 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:03.997 16:31:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77718 ']' 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77718 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77718 ']' 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77718 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77718 00:17:03.997 killing process with pid 77718 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77718' 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77718 00:17:03.997 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77718 00:17:04.256 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:04.257 00:17:04.257 real 0m14.485s 00:17:04.257 user 0m25.129s 00:17:04.257 sys 0m2.516s 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:04.257 16:31:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.257 ************************************ 00:17:04.257 END TEST nvmf_discovery_remove_ifc 00:17:04.257 ************************************ 00:17:04.257 16:31:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:04.257 16:31:49 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:04.257 16:31:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:04.257 16:31:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.257 16:31:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:04.257 ************************************ 00:17:04.257 START TEST nvmf_identify_kernel_target 00:17:04.257 ************************************ 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:04.257 * Looking for test storage... 00:17:04.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.257 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:04.517 Cannot find device "nvmf_tgt_br" 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.517 Cannot find device "nvmf_tgt_br2" 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:04.517 Cannot find device "nvmf_tgt_br" 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:04.517 Cannot find device "nvmf_tgt_br2" 00:17:04.517 16:31:49 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:04.517 16:31:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:04.517 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:04.517 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:04.517 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:04.517 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:04.517 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:04.517 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:04.517 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:04.517 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:04.517 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:04.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:17:04.776 00:17:04.776 --- 10.0.0.2 ping statistics --- 00:17:04.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.776 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:04.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:04.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:04.776 00:17:04.776 --- 10.0.0.3 ping statistics --- 00:17:04.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.776 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:04.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:04.776 00:17:04.776 --- 10.0.0.1 ping statistics --- 00:17:04.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.776 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:04.776 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:05.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:05.035 Waiting for block devices as requested 00:17:05.293 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:05.293 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:05.293 No valid GPT data, bailing 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:05.293 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:05.552 No valid GPT data, bailing 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:05.552 No valid GPT data, bailing 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:05.552 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:05.553 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:05.553 16:31:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:05.553 No valid GPT data, bailing 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
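The mkdir above is the first step of configure_kernel_target: the test builds a kernel NVMe-oF target purely through the nvmet configfs tree and exports /dev/nvme1n1 through it on 10.0.0.1:4420. A minimal standalone sketch of that sequence follows. The xtrace lines only show the values being echoed, not their destinations, so the attribute names used here (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet ones and are an assumption; the "SPDK-nqn..." string echoed at common.sh@665 presumably sets the subsystem model string and is omitted.

# Sketch only: rebuild the kernel target the test configures via configfs.
# Assumes nvmet and nvmet-tcp are loaded and /dev/nvme1n1 is not in use.
NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet

mkdir "$NVMET/subsystems/$NQN"
echo 1 > "$NVMET/subsystems/$NQN/attr_allow_any_host"            # accept any host NQN
mkdir "$NVMET/subsystems/$NQN/namespaces/1"
echo /dev/nvme1n1 > "$NVMET/subsystems/$NQN/namespaces/1/device_path"
echo 1 > "$NVMET/subsystems/$NQN/namespaces/1/enable"            # bring the namespace online

mkdir "$NVMET/ports/1"
echo 10.0.0.1 > "$NVMET/ports/1/addr_traddr"                     # same address the discover/identify calls use
echo tcp > "$NVMET/ports/1/addr_trtype"
echo 4420 > "$NVMET/ports/1/addr_trsvcid"
echo ipv4 > "$NVMET/ports/1/addr_adrfam"

# Linking the subsystem into the port publishes it, which is why the discovery
# log a few lines later reports two records (discovery + nqn.2016-06.io.spdk:testnqn).
ln -s "$NVMET/subsystems/$NQN" "$NVMET/ports/1/subsystems/$NQN"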
00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:05.553 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc -a 10.0.0.1 -t tcp -s 4420 00:17:05.811 00:17:05.811 Discovery Log Number of Records 2, Generation counter 2 00:17:05.811 =====Discovery Log Entry 0====== 00:17:05.811 trtype: tcp 00:17:05.811 adrfam: ipv4 00:17:05.811 subtype: current discovery subsystem 00:17:05.811 treq: not specified, sq flow control disable supported 00:17:05.811 portid: 1 00:17:05.811 trsvcid: 4420 00:17:05.811 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:05.811 traddr: 10.0.0.1 00:17:05.811 eflags: none 00:17:05.811 sectype: none 00:17:05.811 =====Discovery Log Entry 1====== 00:17:05.811 trtype: tcp 00:17:05.811 adrfam: ipv4 00:17:05.811 subtype: nvme subsystem 00:17:05.811 treq: not specified, sq flow control disable supported 00:17:05.811 portid: 1 00:17:05.811 trsvcid: 4420 00:17:05.811 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:05.811 traddr: 10.0.0.1 00:17:05.811 eflags: none 00:17:05.811 sectype: none 00:17:05.811 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:05.811 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:05.811 ===================================================== 00:17:05.811 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:05.811 ===================================================== 00:17:05.811 Controller Capabilities/Features 00:17:05.811 ================================ 00:17:05.811 Vendor ID: 0000 00:17:05.811 Subsystem Vendor ID: 0000 00:17:05.811 Serial Number: cce6c95524a54c8da648 00:17:05.811 Model Number: Linux 00:17:05.811 Firmware Version: 6.7.0-68 00:17:05.811 Recommended Arb Burst: 0 00:17:05.811 IEEE OUI Identifier: 00 00 00 00:17:05.811 Multi-path I/O 00:17:05.811 May have multiple subsystem ports: No 00:17:05.811 May have multiple controllers: No 00:17:05.811 Associated with SR-IOV VF: No 00:17:05.811 Max Data Transfer Size: Unlimited 00:17:05.811 Max Number of Namespaces: 0 
00:17:05.811 Max Number of I/O Queues: 1024 00:17:05.811 NVMe Specification Version (VS): 1.3 00:17:05.811 NVMe Specification Version (Identify): 1.3 00:17:05.811 Maximum Queue Entries: 1024 00:17:05.811 Contiguous Queues Required: No 00:17:05.811 Arbitration Mechanisms Supported 00:17:05.811 Weighted Round Robin: Not Supported 00:17:05.811 Vendor Specific: Not Supported 00:17:05.811 Reset Timeout: 7500 ms 00:17:05.811 Doorbell Stride: 4 bytes 00:17:05.811 NVM Subsystem Reset: Not Supported 00:17:05.811 Command Sets Supported 00:17:05.811 NVM Command Set: Supported 00:17:05.811 Boot Partition: Not Supported 00:17:05.811 Memory Page Size Minimum: 4096 bytes 00:17:05.811 Memory Page Size Maximum: 4096 bytes 00:17:05.811 Persistent Memory Region: Not Supported 00:17:05.811 Optional Asynchronous Events Supported 00:17:05.811 Namespace Attribute Notices: Not Supported 00:17:05.811 Firmware Activation Notices: Not Supported 00:17:05.811 ANA Change Notices: Not Supported 00:17:05.811 PLE Aggregate Log Change Notices: Not Supported 00:17:05.811 LBA Status Info Alert Notices: Not Supported 00:17:05.811 EGE Aggregate Log Change Notices: Not Supported 00:17:05.811 Normal NVM Subsystem Shutdown event: Not Supported 00:17:05.811 Zone Descriptor Change Notices: Not Supported 00:17:05.811 Discovery Log Change Notices: Supported 00:17:05.811 Controller Attributes 00:17:05.811 128-bit Host Identifier: Not Supported 00:17:05.811 Non-Operational Permissive Mode: Not Supported 00:17:05.811 NVM Sets: Not Supported 00:17:05.811 Read Recovery Levels: Not Supported 00:17:05.811 Endurance Groups: Not Supported 00:17:05.811 Predictable Latency Mode: Not Supported 00:17:05.811 Traffic Based Keep ALive: Not Supported 00:17:05.811 Namespace Granularity: Not Supported 00:17:05.811 SQ Associations: Not Supported 00:17:05.811 UUID List: Not Supported 00:17:05.811 Multi-Domain Subsystem: Not Supported 00:17:05.811 Fixed Capacity Management: Not Supported 00:17:05.811 Variable Capacity Management: Not Supported 00:17:05.811 Delete Endurance Group: Not Supported 00:17:05.811 Delete NVM Set: Not Supported 00:17:05.811 Extended LBA Formats Supported: Not Supported 00:17:05.811 Flexible Data Placement Supported: Not Supported 00:17:05.811 00:17:05.812 Controller Memory Buffer Support 00:17:05.812 ================================ 00:17:05.812 Supported: No 00:17:05.812 00:17:05.812 Persistent Memory Region Support 00:17:05.812 ================================ 00:17:05.812 Supported: No 00:17:05.812 00:17:05.812 Admin Command Set Attributes 00:17:05.812 ============================ 00:17:05.812 Security Send/Receive: Not Supported 00:17:05.812 Format NVM: Not Supported 00:17:05.812 Firmware Activate/Download: Not Supported 00:17:05.812 Namespace Management: Not Supported 00:17:05.812 Device Self-Test: Not Supported 00:17:05.812 Directives: Not Supported 00:17:05.812 NVMe-MI: Not Supported 00:17:05.812 Virtualization Management: Not Supported 00:17:05.812 Doorbell Buffer Config: Not Supported 00:17:05.812 Get LBA Status Capability: Not Supported 00:17:05.812 Command & Feature Lockdown Capability: Not Supported 00:17:05.812 Abort Command Limit: 1 00:17:05.812 Async Event Request Limit: 1 00:17:05.812 Number of Firmware Slots: N/A 00:17:05.812 Firmware Slot 1 Read-Only: N/A 00:17:05.812 Firmware Activation Without Reset: N/A 00:17:05.812 Multiple Update Detection Support: N/A 00:17:05.812 Firmware Update Granularity: No Information Provided 00:17:05.812 Per-Namespace SMART Log: No 00:17:05.812 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:05.812 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:05.812 Command Effects Log Page: Not Supported 00:17:05.812 Get Log Page Extended Data: Supported 00:17:05.812 Telemetry Log Pages: Not Supported 00:17:05.812 Persistent Event Log Pages: Not Supported 00:17:05.812 Supported Log Pages Log Page: May Support 00:17:05.812 Commands Supported & Effects Log Page: Not Supported 00:17:05.812 Feature Identifiers & Effects Log Page:May Support 00:17:05.812 NVMe-MI Commands & Effects Log Page: May Support 00:17:05.812 Data Area 4 for Telemetry Log: Not Supported 00:17:05.812 Error Log Page Entries Supported: 1 00:17:05.812 Keep Alive: Not Supported 00:17:05.812 00:17:05.812 NVM Command Set Attributes 00:17:05.812 ========================== 00:17:05.812 Submission Queue Entry Size 00:17:05.812 Max: 1 00:17:05.812 Min: 1 00:17:05.812 Completion Queue Entry Size 00:17:05.812 Max: 1 00:17:05.812 Min: 1 00:17:05.812 Number of Namespaces: 0 00:17:05.812 Compare Command: Not Supported 00:17:05.812 Write Uncorrectable Command: Not Supported 00:17:05.812 Dataset Management Command: Not Supported 00:17:05.812 Write Zeroes Command: Not Supported 00:17:05.812 Set Features Save Field: Not Supported 00:17:05.812 Reservations: Not Supported 00:17:05.812 Timestamp: Not Supported 00:17:05.812 Copy: Not Supported 00:17:05.812 Volatile Write Cache: Not Present 00:17:05.812 Atomic Write Unit (Normal): 1 00:17:05.812 Atomic Write Unit (PFail): 1 00:17:05.812 Atomic Compare & Write Unit: 1 00:17:05.812 Fused Compare & Write: Not Supported 00:17:05.812 Scatter-Gather List 00:17:05.812 SGL Command Set: Supported 00:17:05.812 SGL Keyed: Not Supported 00:17:05.812 SGL Bit Bucket Descriptor: Not Supported 00:17:05.812 SGL Metadata Pointer: Not Supported 00:17:05.812 Oversized SGL: Not Supported 00:17:05.812 SGL Metadata Address: Not Supported 00:17:05.812 SGL Offset: Supported 00:17:05.812 Transport SGL Data Block: Not Supported 00:17:05.812 Replay Protected Memory Block: Not Supported 00:17:05.812 00:17:05.812 Firmware Slot Information 00:17:05.812 ========================= 00:17:05.812 Active slot: 0 00:17:05.812 00:17:05.812 00:17:05.812 Error Log 00:17:05.812 ========= 00:17:05.812 00:17:05.812 Active Namespaces 00:17:05.812 ================= 00:17:05.812 Discovery Log Page 00:17:05.812 ================== 00:17:05.812 Generation Counter: 2 00:17:05.812 Number of Records: 2 00:17:05.812 Record Format: 0 00:17:05.812 00:17:05.812 Discovery Log Entry 0 00:17:05.812 ---------------------- 00:17:05.812 Transport Type: 3 (TCP) 00:17:05.812 Address Family: 1 (IPv4) 00:17:05.812 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:05.812 Entry Flags: 00:17:05.812 Duplicate Returned Information: 0 00:17:05.812 Explicit Persistent Connection Support for Discovery: 0 00:17:05.812 Transport Requirements: 00:17:05.812 Secure Channel: Not Specified 00:17:05.812 Port ID: 1 (0x0001) 00:17:05.812 Controller ID: 65535 (0xffff) 00:17:05.812 Admin Max SQ Size: 32 00:17:05.812 Transport Service Identifier: 4420 00:17:05.812 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:05.812 Transport Address: 10.0.0.1 00:17:05.812 Discovery Log Entry 1 00:17:05.812 ---------------------- 00:17:05.812 Transport Type: 3 (TCP) 00:17:05.812 Address Family: 1 (IPv4) 00:17:05.812 Subsystem Type: 2 (NVM Subsystem) 00:17:05.812 Entry Flags: 00:17:05.812 Duplicate Returned Information: 0 00:17:05.812 Explicit Persistent Connection Support for Discovery: 0 00:17:05.812 Transport Requirements: 00:17:05.812 
Secure Channel: Not Specified 00:17:05.812 Port ID: 1 (0x0001) 00:17:05.812 Controller ID: 65535 (0xffff) 00:17:05.812 Admin Max SQ Size: 32 00:17:05.812 Transport Service Identifier: 4420 00:17:05.812 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:05.812 Transport Address: 10.0.0.1 00:17:05.812 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:06.071 get_feature(0x01) failed 00:17:06.071 get_feature(0x02) failed 00:17:06.071 get_feature(0x04) failed 00:17:06.071 ===================================================== 00:17:06.071 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:06.071 ===================================================== 00:17:06.071 Controller Capabilities/Features 00:17:06.071 ================================ 00:17:06.071 Vendor ID: 0000 00:17:06.071 Subsystem Vendor ID: 0000 00:17:06.071 Serial Number: a4fcfbc1febb707ccb97 00:17:06.071 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:06.071 Firmware Version: 6.7.0-68 00:17:06.071 Recommended Arb Burst: 6 00:17:06.071 IEEE OUI Identifier: 00 00 00 00:17:06.071 Multi-path I/O 00:17:06.071 May have multiple subsystem ports: Yes 00:17:06.071 May have multiple controllers: Yes 00:17:06.071 Associated with SR-IOV VF: No 00:17:06.071 Max Data Transfer Size: Unlimited 00:17:06.071 Max Number of Namespaces: 1024 00:17:06.071 Max Number of I/O Queues: 128 00:17:06.071 NVMe Specification Version (VS): 1.3 00:17:06.071 NVMe Specification Version (Identify): 1.3 00:17:06.071 Maximum Queue Entries: 1024 00:17:06.071 Contiguous Queues Required: No 00:17:06.071 Arbitration Mechanisms Supported 00:17:06.071 Weighted Round Robin: Not Supported 00:17:06.071 Vendor Specific: Not Supported 00:17:06.071 Reset Timeout: 7500 ms 00:17:06.071 Doorbell Stride: 4 bytes 00:17:06.071 NVM Subsystem Reset: Not Supported 00:17:06.071 Command Sets Supported 00:17:06.071 NVM Command Set: Supported 00:17:06.071 Boot Partition: Not Supported 00:17:06.071 Memory Page Size Minimum: 4096 bytes 00:17:06.071 Memory Page Size Maximum: 4096 bytes 00:17:06.071 Persistent Memory Region: Not Supported 00:17:06.071 Optional Asynchronous Events Supported 00:17:06.071 Namespace Attribute Notices: Supported 00:17:06.071 Firmware Activation Notices: Not Supported 00:17:06.072 ANA Change Notices: Supported 00:17:06.072 PLE Aggregate Log Change Notices: Not Supported 00:17:06.072 LBA Status Info Alert Notices: Not Supported 00:17:06.072 EGE Aggregate Log Change Notices: Not Supported 00:17:06.072 Normal NVM Subsystem Shutdown event: Not Supported 00:17:06.072 Zone Descriptor Change Notices: Not Supported 00:17:06.072 Discovery Log Change Notices: Not Supported 00:17:06.072 Controller Attributes 00:17:06.072 128-bit Host Identifier: Supported 00:17:06.072 Non-Operational Permissive Mode: Not Supported 00:17:06.072 NVM Sets: Not Supported 00:17:06.072 Read Recovery Levels: Not Supported 00:17:06.072 Endurance Groups: Not Supported 00:17:06.072 Predictable Latency Mode: Not Supported 00:17:06.072 Traffic Based Keep ALive: Supported 00:17:06.072 Namespace Granularity: Not Supported 00:17:06.072 SQ Associations: Not Supported 00:17:06.072 UUID List: Not Supported 00:17:06.072 Multi-Domain Subsystem: Not Supported 00:17:06.072 Fixed Capacity Management: Not Supported 00:17:06.072 Variable Capacity Management: Not Supported 00:17:06.072 
Delete Endurance Group: Not Supported 00:17:06.072 Delete NVM Set: Not Supported 00:17:06.072 Extended LBA Formats Supported: Not Supported 00:17:06.072 Flexible Data Placement Supported: Not Supported 00:17:06.072 00:17:06.072 Controller Memory Buffer Support 00:17:06.072 ================================ 00:17:06.072 Supported: No 00:17:06.072 00:17:06.072 Persistent Memory Region Support 00:17:06.072 ================================ 00:17:06.072 Supported: No 00:17:06.072 00:17:06.072 Admin Command Set Attributes 00:17:06.072 ============================ 00:17:06.072 Security Send/Receive: Not Supported 00:17:06.072 Format NVM: Not Supported 00:17:06.072 Firmware Activate/Download: Not Supported 00:17:06.072 Namespace Management: Not Supported 00:17:06.072 Device Self-Test: Not Supported 00:17:06.072 Directives: Not Supported 00:17:06.072 NVMe-MI: Not Supported 00:17:06.072 Virtualization Management: Not Supported 00:17:06.072 Doorbell Buffer Config: Not Supported 00:17:06.072 Get LBA Status Capability: Not Supported 00:17:06.072 Command & Feature Lockdown Capability: Not Supported 00:17:06.072 Abort Command Limit: 4 00:17:06.072 Async Event Request Limit: 4 00:17:06.072 Number of Firmware Slots: N/A 00:17:06.072 Firmware Slot 1 Read-Only: N/A 00:17:06.072 Firmware Activation Without Reset: N/A 00:17:06.072 Multiple Update Detection Support: N/A 00:17:06.072 Firmware Update Granularity: No Information Provided 00:17:06.072 Per-Namespace SMART Log: Yes 00:17:06.072 Asymmetric Namespace Access Log Page: Supported 00:17:06.072 ANA Transition Time : 10 sec 00:17:06.072 00:17:06.072 Asymmetric Namespace Access Capabilities 00:17:06.072 ANA Optimized State : Supported 00:17:06.072 ANA Non-Optimized State : Supported 00:17:06.072 ANA Inaccessible State : Supported 00:17:06.072 ANA Persistent Loss State : Supported 00:17:06.072 ANA Change State : Supported 00:17:06.072 ANAGRPID is not changed : No 00:17:06.072 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:06.072 00:17:06.072 ANA Group Identifier Maximum : 128 00:17:06.072 Number of ANA Group Identifiers : 128 00:17:06.072 Max Number of Allowed Namespaces : 1024 00:17:06.072 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:06.072 Command Effects Log Page: Supported 00:17:06.072 Get Log Page Extended Data: Supported 00:17:06.072 Telemetry Log Pages: Not Supported 00:17:06.072 Persistent Event Log Pages: Not Supported 00:17:06.072 Supported Log Pages Log Page: May Support 00:17:06.072 Commands Supported & Effects Log Page: Not Supported 00:17:06.072 Feature Identifiers & Effects Log Page:May Support 00:17:06.072 NVMe-MI Commands & Effects Log Page: May Support 00:17:06.072 Data Area 4 for Telemetry Log: Not Supported 00:17:06.072 Error Log Page Entries Supported: 128 00:17:06.072 Keep Alive: Supported 00:17:06.072 Keep Alive Granularity: 1000 ms 00:17:06.072 00:17:06.072 NVM Command Set Attributes 00:17:06.072 ========================== 00:17:06.072 Submission Queue Entry Size 00:17:06.072 Max: 64 00:17:06.072 Min: 64 00:17:06.072 Completion Queue Entry Size 00:17:06.072 Max: 16 00:17:06.072 Min: 16 00:17:06.072 Number of Namespaces: 1024 00:17:06.072 Compare Command: Not Supported 00:17:06.072 Write Uncorrectable Command: Not Supported 00:17:06.072 Dataset Management Command: Supported 00:17:06.072 Write Zeroes Command: Supported 00:17:06.072 Set Features Save Field: Not Supported 00:17:06.072 Reservations: Not Supported 00:17:06.072 Timestamp: Not Supported 00:17:06.072 Copy: Not Supported 00:17:06.072 Volatile Write Cache: Present 
00:17:06.072 Atomic Write Unit (Normal): 1 00:17:06.072 Atomic Write Unit (PFail): 1 00:17:06.072 Atomic Compare & Write Unit: 1 00:17:06.072 Fused Compare & Write: Not Supported 00:17:06.072 Scatter-Gather List 00:17:06.072 SGL Command Set: Supported 00:17:06.072 SGL Keyed: Not Supported 00:17:06.072 SGL Bit Bucket Descriptor: Not Supported 00:17:06.072 SGL Metadata Pointer: Not Supported 00:17:06.072 Oversized SGL: Not Supported 00:17:06.072 SGL Metadata Address: Not Supported 00:17:06.072 SGL Offset: Supported 00:17:06.072 Transport SGL Data Block: Not Supported 00:17:06.072 Replay Protected Memory Block: Not Supported 00:17:06.072 00:17:06.072 Firmware Slot Information 00:17:06.072 ========================= 00:17:06.072 Active slot: 0 00:17:06.072 00:17:06.072 Asymmetric Namespace Access 00:17:06.072 =========================== 00:17:06.072 Change Count : 0 00:17:06.073 Number of ANA Group Descriptors : 1 00:17:06.073 ANA Group Descriptor : 0 00:17:06.073 ANA Group ID : 1 00:17:06.073 Number of NSID Values : 1 00:17:06.073 Change Count : 0 00:17:06.073 ANA State : 1 00:17:06.073 Namespace Identifier : 1 00:17:06.073 00:17:06.073 Commands Supported and Effects 00:17:06.073 ============================== 00:17:06.073 Admin Commands 00:17:06.073 -------------- 00:17:06.073 Get Log Page (02h): Supported 00:17:06.073 Identify (06h): Supported 00:17:06.073 Abort (08h): Supported 00:17:06.073 Set Features (09h): Supported 00:17:06.073 Get Features (0Ah): Supported 00:17:06.073 Asynchronous Event Request (0Ch): Supported 00:17:06.073 Keep Alive (18h): Supported 00:17:06.073 I/O Commands 00:17:06.073 ------------ 00:17:06.073 Flush (00h): Supported 00:17:06.073 Write (01h): Supported LBA-Change 00:17:06.073 Read (02h): Supported 00:17:06.073 Write Zeroes (08h): Supported LBA-Change 00:17:06.073 Dataset Management (09h): Supported 00:17:06.073 00:17:06.073 Error Log 00:17:06.073 ========= 00:17:06.073 Entry: 0 00:17:06.073 Error Count: 0x3 00:17:06.073 Submission Queue Id: 0x0 00:17:06.073 Command Id: 0x5 00:17:06.073 Phase Bit: 0 00:17:06.073 Status Code: 0x2 00:17:06.073 Status Code Type: 0x0 00:17:06.073 Do Not Retry: 1 00:17:06.073 Error Location: 0x28 00:17:06.073 LBA: 0x0 00:17:06.073 Namespace: 0x0 00:17:06.073 Vendor Log Page: 0x0 00:17:06.073 ----------- 00:17:06.073 Entry: 1 00:17:06.073 Error Count: 0x2 00:17:06.073 Submission Queue Id: 0x0 00:17:06.073 Command Id: 0x5 00:17:06.073 Phase Bit: 0 00:17:06.073 Status Code: 0x2 00:17:06.073 Status Code Type: 0x0 00:17:06.073 Do Not Retry: 1 00:17:06.073 Error Location: 0x28 00:17:06.073 LBA: 0x0 00:17:06.073 Namespace: 0x0 00:17:06.073 Vendor Log Page: 0x0 00:17:06.073 ----------- 00:17:06.073 Entry: 2 00:17:06.073 Error Count: 0x1 00:17:06.073 Submission Queue Id: 0x0 00:17:06.073 Command Id: 0x4 00:17:06.073 Phase Bit: 0 00:17:06.073 Status Code: 0x2 00:17:06.073 Status Code Type: 0x0 00:17:06.073 Do Not Retry: 1 00:17:06.073 Error Location: 0x28 00:17:06.073 LBA: 0x0 00:17:06.073 Namespace: 0x0 00:17:06.073 Vendor Log Page: 0x0 00:17:06.073 00:17:06.073 Number of Queues 00:17:06.073 ================ 00:17:06.073 Number of I/O Submission Queues: 128 00:17:06.073 Number of I/O Completion Queues: 128 00:17:06.073 00:17:06.073 ZNS Specific Controller Data 00:17:06.073 ============================ 00:17:06.073 Zone Append Size Limit: 0 00:17:06.073 00:17:06.073 00:17:06.073 Active Namespaces 00:17:06.073 ================= 00:17:06.073 get_feature(0x05) failed 00:17:06.073 Namespace ID:1 00:17:06.073 Command Set Identifier: NVM (00h) 
00:17:06.073 Deallocate: Supported 00:17:06.073 Deallocated/Unwritten Error: Not Supported 00:17:06.073 Deallocated Read Value: Unknown 00:17:06.073 Deallocate in Write Zeroes: Not Supported 00:17:06.073 Deallocated Guard Field: 0xFFFF 00:17:06.073 Flush: Supported 00:17:06.073 Reservation: Not Supported 00:17:06.073 Namespace Sharing Capabilities: Multiple Controllers 00:17:06.073 Size (in LBAs): 1310720 (5GiB) 00:17:06.073 Capacity (in LBAs): 1310720 (5GiB) 00:17:06.073 Utilization (in LBAs): 1310720 (5GiB) 00:17:06.073 UUID: 2e94903b-8b91-498f-9cce-c57dce0c89ef 00:17:06.073 Thin Provisioning: Not Supported 00:17:06.073 Per-NS Atomic Units: Yes 00:17:06.073 Atomic Boundary Size (Normal): 0 00:17:06.073 Atomic Boundary Size (PFail): 0 00:17:06.073 Atomic Boundary Offset: 0 00:17:06.073 NGUID/EUI64 Never Reused: No 00:17:06.073 ANA group ID: 1 00:17:06.073 Namespace Write Protected: No 00:17:06.073 Number of LBA Formats: 1 00:17:06.073 Current LBA Format: LBA Format #00 00:17:06.073 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:06.073 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.073 rmmod nvme_tcp 00:17:06.073 rmmod nvme_fabrics 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.073 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:06.074 
16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:06.074 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:06.343 16:31:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:06.909 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:06.909 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:06.909 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:07.168 00:17:07.168 real 0m2.834s 00:17:07.168 user 0m1.001s 00:17:07.168 sys 0m1.368s 00:17:07.168 16:31:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:07.168 16:31:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.168 ************************************ 00:17:07.168 END TEST nvmf_identify_kernel_target 00:17:07.168 ************************************ 00:17:07.168 16:31:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:07.168 16:31:52 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:07.168 16:31:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:07.168 16:31:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.168 16:31:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:07.168 ************************************ 00:17:07.168 START TEST nvmf_auth_host 00:17:07.168 ************************************ 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:07.168 * Looking for test storage... 
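By this point identify_kernel_nvmf's EXIT trap has already torn the kernel target back down (the echo 0 / rm -f / rmdir / modprobe -r sequence logged just above). A matching standalone teardown, again assuming the standard nvmet configfs layout and that the echo 0 at common.sh@686 disables the namespace, is roughly:

# Sketch only: undo the configfs target built earlier; order matters.
NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet

echo 0 > "$NVMET/subsystems/$NQN/namespaces/1/enable"   # take the namespace offline first
rm -f "$NVMET/ports/1/subsystems/$NQN"                  # unpublish before removing the port
rmdir "$NVMET/subsystems/$NQN/namespaces/1"
rmdir "$NVMET/ports/1"
rmdir "$NVMET/subsystems/$NQN"
modprobe -r nvmet_tcp nvmet                             # only succeeds once nothing holds nvmet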
00:17:07.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.168 16:31:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:07.169 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:07.427 Cannot find device "nvmf_tgt_br" 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.427 Cannot find device "nvmf_tgt_br2" 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:07.427 Cannot find device "nvmf_tgt_br" 
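The "Cannot find device" messages here are expected: nvmf_veth_init deletes any leftover interfaces before rebuilding the test topology, exactly as the identify_kernel_target run did at 16:31:49. For reference, the layout it builds can be reproduced on its own with the commands below; every name and address is taken from the log, but this is still a sketch rather than the library function itself.

# Sketch only: the veth/bridge topology used for NVMe/TCP tests on 10.0.0.0/24.
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, two for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# The target ends move into the namespace; the *_br peers stay on the host.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A single bridge joins the host-side peers so 10.0.0.1, .2 and .3 can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic to the initiator interface and across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT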
00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:07.427 Cannot find device "nvmf_tgt_br2" 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.427 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.684 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.684 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:07.684 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:07.684 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:07.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:17:07.684 00:17:07.684 --- 10.0.0.2 ping statistics --- 00:17:07.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.684 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:07.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:07.684 00:17:07.684 --- 10.0.0.3 ping statistics --- 00:17:07.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.684 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:07.684 00:17:07.684 --- 10.0.0.1 ping statistics --- 00:17:07.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.684 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78638 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78638 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78638 ']' 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.684 16:31:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.684 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.619 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.619 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:08.619 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.619 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.619 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de1409d8da1d58daf7e937c618079d13 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5kB 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de1409d8da1d58daf7e937c618079d13 0 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de1409d8da1d58daf7e937c618079d13 0 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de1409d8da1d58daf7e937c618079d13 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5kB 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5kB 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5kB 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f6aca7e00f68db6942a9e80d189efee517d12d3838d037e03789d54bf8d4f5f6 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XSr 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f6aca7e00f68db6942a9e80d189efee517d12d3838d037e03789d54bf8d4f5f6 3 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f6aca7e00f68db6942a9e80d189efee517d12d3838d037e03789d54bf8d4f5f6 3 00:17:08.878 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f6aca7e00f68db6942a9e80d189efee517d12d3838d037e03789d54bf8d4f5f6 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XSr 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XSr 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XSr 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0b18c8480079febf485d14424a28ad97d48248b1c2db89fc 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QeE 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0b18c8480079febf485d14424a28ad97d48248b1c2db89fc 0 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0b18c8480079febf485d14424a28ad97d48248b1c2db89fc 0 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0b18c8480079febf485d14424a28ad97d48248b1c2db89fc 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QeE 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QeE 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.QeE 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=795fb2d124c1c2258ae6a2d75a9fb1d7e06f10f8324c5478 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8dp 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 795fb2d124c1c2258ae6a2d75a9fb1d7e06f10f8324c5478 2 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 795fb2d124c1c2258ae6a2d75a9fb1d7e06f10f8324c5478 2 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=795fb2d124c1c2258ae6a2d75a9fb1d7e06f10f8324c5478 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8dp 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8dp 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.8dp 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:08.879 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=35bcafae16bc45c8d9982f9734a0d96e 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jfG 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 35bcafae16bc45c8d9982f9734a0d96e 
1 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 35bcafae16bc45c8d9982f9734a0d96e 1 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=35bcafae16bc45c8d9982f9734a0d96e 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jfG 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jfG 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jfG 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e994eb602b9f036ffab38407f3acc899 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Qpd 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e994eb602b9f036ffab38407f3acc899 1 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e994eb602b9f036ffab38407f3acc899 1 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e994eb602b9f036ffab38407f3acc899 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Qpd 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Qpd 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Qpd 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:09.138 16:31:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cfa3a76e71e191f15a3570aeaead857903b5ea69d97e4f88 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KgU 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cfa3a76e71e191f15a3570aeaead857903b5ea69d97e4f88 2 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cfa3a76e71e191f15a3570aeaead857903b5ea69d97e4f88 2 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cfa3a76e71e191f15a3570aeaead857903b5ea69d97e4f88 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KgU 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KgU 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.KgU 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:09.138 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8b848228f7b20b0c20f101a2e131a6f1 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ag0 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8b848228f7b20b0c20f101a2e131a6f1 0 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8b848228f7b20b0c20f101a2e131a6f1 0 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8b848228f7b20b0c20f101a2e131a6f1 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Ag0 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ag0 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Ag0 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8562085412e117dd7a4f56bb8e4265586537778dd5b11f1623d7f9a9c9c0b7c1 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KE7 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8562085412e117dd7a4f56bb8e4265586537778dd5b11f1623d7f9a9c9c0b7c1 3 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8562085412e117dd7a4f56bb8e4265586537778dd5b11f1623d7f9a9c9c0b7c1 3 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8562085412e117dd7a4f56bb8e4265586537778dd5b11f1623d7f9a9c9c0b7c1 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:09.139 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KE7 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KE7 00:17:09.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KE7 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78638 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78638 ']' 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
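The gen_dhchap_key traces above reduce to a small amount of shell: read len/2 random bytes with xxd, keep them as an ASCII hex string, wrap that string in the NVMe DH-HMAC-CHAP secret representation, and store it in a mode-0600 temp file whose path lands in keys[]/ckeys[]. The sketch below is a standalone approximation, not the harness's own helper; the final python step is inferred from the DHHC-1 strings printed later in the log (their base64 body decodes to the hex text plus four extra bytes, which looks like a CRC-32), so treat the checksum handling and its byte order as assumptions.

  # Standalone sketch of the gen_dhchap_key / format_dhchap_key calls traced above.
  # Assumption: the python helper base64-encodes the ASCII hex secret followed by
  # its CRC-32 (packed little-endian) to build "DHHC-1:0N:<base64>:".
  gen_dhchap_key_sketch() {
      local digest_id=$1 len=$2   # e.g. 0 and 32, matching 'gen_dhchap_key null 32'
      local hex file
      hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      file=$(mktemp -t spdk.key-sketch.XXX)
      python3 -c 'import sys,zlib,struct,base64; s=sys.argv[1].encode(); print("DHHC-1:" + sys.argv[2] + ":" + base64.b64encode(s + struct.pack("<I", zlib.crc32(s))).decode() + ":")' "$hex" "0$digest_id" > "$file"
      chmod 0600 "$file"
      echo "$file"   # callers keep only this path, as in keys[0]=/tmp/spdk.key-null.5kB
  }

As a sanity check, the base64 body of the DHHC-1:00:MGIxOGM4...: secret that appears further down does decode back to the 0b18c848... hex string generated in this trace, which is what the sketch is based on.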
00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.397 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5kB 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XSr ]] 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XSr 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.QeE 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.8dp ]] 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8dp 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jfG 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.656 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Qpd ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qpd 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
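Each rpc_cmd in the registration loop above (and the key3/key4 registrations that continue just below) is the test harness's wrapper around scripts/rpc.py, pointed at the target's RPC socket; the waitforlisten message gives that socket as /var/tmp/spdk.sock. Spelled out directly, under the assumption that rpc_cmd adds nothing beyond the socket option, the calls are:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc keyring_file_add_key key0  /tmp/spdk.key-null.5kB     # host secret for keyid 0
  $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XSr   # matching controller secret
  $rpc keyring_file_add_key key1  /tmp/spdk.key-null.QeE
  $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8dp
  $rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.jfG
  $rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qpd

The bdev_nvme_attach_controller calls later in the log refer to these secrets only by the key0/ckey0 names registered here, never by file path.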
00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.KgU 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Ag0 ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Ag0 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KE7 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.656 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
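configure_kernel_target, whose trace continues below, builds a Linux kernel nvmet subsystem for the SPDK host to authenticate against, driving everything through configfs. Condensed into plain shell, with the values taken from the log and with the caveat that xtrace does not show where each echo is redirected (the attribute names below are the standard kernel nvmet configfs ones, filled in here as an assumption), the sequence is roughly:

  # Kernel target setup as traced below; the redirect targets are assumed.
  # /dev/nvme1n1 is the first disk that passes the "No valid GPT data, bailing"
  # not-in-use check in the trace.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

The later nvmet_auth_init and nvmet_auth_set_key traces follow the same pattern for the host side: mkdir an entry under /sys/kernel/config/nvmet/hosts, link it into the subsystem's allowed_hosts, and echo 'hmac(sha256)', the dhgroup name and the DHHC-1 secrets into that host's dhchap_* attributes (again an assumption about the hidden redirect targets).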
00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:09.657 16:31:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:09.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:09.915 Waiting for block devices as requested 00:17:09.915 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:10.174 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:10.832 No valid GPT data, bailing 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:10.832 No valid GPT data, bailing 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:10.832 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:10.833 No valid GPT data, bailing 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:10.833 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:11.091 No valid GPT data, bailing 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:11.091 16:31:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc -a 10.0.0.1 -t tcp -s 4420 00:17:11.091 00:17:11.091 Discovery Log Number of Records 2, Generation counter 2 00:17:11.091 =====Discovery Log Entry 0====== 00:17:11.091 trtype: tcp 00:17:11.091 adrfam: ipv4 00:17:11.091 subtype: current discovery subsystem 00:17:11.091 treq: not specified, sq flow control disable supported 00:17:11.091 portid: 1 00:17:11.091 trsvcid: 4420 00:17:11.091 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:11.091 traddr: 10.0.0.1 00:17:11.091 eflags: none 00:17:11.091 sectype: none 00:17:11.091 =====Discovery Log Entry 1====== 00:17:11.091 trtype: tcp 00:17:11.091 adrfam: ipv4 00:17:11.091 subtype: nvme subsystem 00:17:11.091 treq: not specified, sq flow control disable supported 00:17:11.091 portid: 1 00:17:11.091 trsvcid: 4420 00:17:11.091 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:11.091 traddr: 10.0.0.1 00:17:11.091 eflags: none 00:17:11.091 sectype: none 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.091 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.092 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.350 nvme0n1 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:11.350 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.351 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.609 nvme0n1 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.609 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.610 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.610 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.610 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.610 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.610 nvme0n1 00:17:11.610 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.610 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.610 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.610 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.610 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.610 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.868 16:31:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.868 nvme0n1 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:11.868 16:31:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.868 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.127 nvme0n1 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.127 nvme0n1 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.127 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.385 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.385 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.385 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.385 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.385 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.386 16:31:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.645 nvme0n1 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.645 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.904 nvme0n1 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.904 16:31:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.904 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.163 nvme0n1 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:13.163 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.164 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.422 nvme0n1 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.422 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.423 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.423 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.423 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.423 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
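The get_main_ns_ip helper being traced here (nvmf/common.sh@741-755) resolves the address the initiator should connect to: NVMF_FIRST_TARGET_IP for rdma and NVMF_INITIATOR_IP for tcp, which expands to 10.0.0.1 in this run. A minimal bash reconstruction of that helper, inferred from the xtrace above rather than copied from nvmf/common.sh (the transport variable name, shown here as TEST_TRANSPORT, is assumed since the trace only shows its expanded value tcp, and the lines between @750 and @755 are not exercised in this excerpt), looks roughly like:

    get_main_ns_ip() {
        # Map each transport to the environment variable holding its address.
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # The trace shows '[[ -z tcp ]]' and '[[ -z NVMF_INITIATOR_IP ]]': both the
        # transport and the variable name selected for it must be non-empty.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # Indirect expansion turns 'ip=NVMF_INITIATOR_IP' into the echoed 10.0.0.1.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }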
00:17:13.423 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.423 16:31:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.423 16:31:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:13.423 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.423 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.681 nvme0n1 00:17:13.681 16:31:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.681 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.270 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:14.270 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
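At this point the trace has moved on to the ffdhe4096 group, repeating the pattern it just completed for ffdhe2048 and ffdhe3072: for every key index, install the key on the target side, then authenticate a fresh host connection and tear it down. Condensed from the host/auth.sh line numbers visible in the trace (the loops at @101-@102, nvmet_auth_set_key at @42-@51, connect_authenticate at @55-@65), the shape of the test is roughly the sketch below; rpc_cmd, the keys/ckeys/dhgroups arrays and the nvmet-side configfs writes are taken as given, since their definitions are not part of this excerpt:

    # Condensed sketch of the loop being traced; helper bodies are abbreviated.
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do       # key indices 0..4
            # Target side: install key $keyid (and its ctrlr key, if any) for hmac(sha256)/$dhgroup.
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"

            # Host side: restrict the initiator to the digest/dhgroup under test ...
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # ... connect with the matching key pair (controller key only when a ckey exists) ...
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" \
                ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
            # ... verify the authenticated controller shows up, then detach for the next round.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done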
00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.271 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.529 nvme0n1 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.529 16:31:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.786 nvme0n1 00:17:14.786 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.786 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.786 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.786 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.786 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.786 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.786 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.786 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.786 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.787 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.044 nvme0n1 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.044 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.302 nvme0n1 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.302 16:32:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.302 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.560 nvme0n1 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.560 16:32:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.461 16:32:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.721 nvme0n1 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.721 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.722 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.986 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.244 nvme0n1 00:17:18.244 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.244 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.244 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.244 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.244 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.245 
16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.245 16:32:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.503 nvme0n1 00:17:18.503 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.503 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.503 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.503 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.503 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.762 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.022 nvme0n1 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.022 16:32:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.022 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.281 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.281 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.281 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.281 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.281 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.281 16:32:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.281 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:19.281 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.281 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.540 nvme0n1 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.540 16:32:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.540 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.105 nvme0n1 00:17:20.105 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.105 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.105 16:32:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.105 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.105 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.105 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.363 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.364 16:32:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.927 nvme0n1 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.928 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.494 nvme0n1 00:17:21.495 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.495 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.495 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.495 16:32:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.495 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.495 16:32:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.495 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.495 
16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.495 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.495 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.754 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.754 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.754 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:21.754 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.754 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
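The rpc_cmd sequence traced above repeats the same attach/verify/detach pattern for every digest/dhgroup/keyid combination. As a reading aid, here is a hedged reconstruction of a single iteration; the RPC names and flags (--dhchap-digests, --dhchap-dhgroups, --dhchap-key/--dhchap-ctrlr-key, -q/-n NQNs, 10.0.0.1:4420) are copied verbatim from the trace, while invoking scripts/rpc.py directly instead of the test's rpc_cmd wrapper, and the three loose shell variables, are illustrative assumptions, not the exact host/auth.sh code.

  # One connect/verify/detach cycle as implied by the trace (sketch only).
  digest=sha256 dhgroup=ffdhe8192 keyid=3
  # Restrict the initiator to the digest/dhgroup pair under test.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Attach to the target listening on 10.0.0.1:4420, authenticating with the named
  # keys key<N>/ckey<N> that were registered earlier in the test (outside this excerpt).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  # The step only passes if the authenticated controller shows up; it is then torn down again.
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0
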
00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.755 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.322 nvme0n1 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.322 
16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.322 16:32:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.888 nvme0n1 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.888 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.147 nvme0n1 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
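At this point the digest/dhgroup pair switches again (sha384 with ffdhe2048 from here on), driven by the nested loops traced at host/auth.sh@100-104. A rough skeleton of that driver, reconstructed from the trace, is shown below; the array literals list only the values visible in this excerpt, the keys/ckeys placeholders stand in for the DHHC-1 secrets generated earlier in the script, and the two helper functions are defined elsewhere in host/auth.sh, so treat this as a sketch rather than the script's exact code.

  # Loop skeleton implied by the host/auth.sh@100-104 trace lines (sketch only).
  digests=(sha256 sha384)                    # only the digests seen in this excerpt
  dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)   # only the dhgroups seen in this excerpt
  # keys[0..4]/ckeys[0..4] hold the DHHC-1 secrets echoed above; placeholders stand in here.
  keys=(k0 k1 k2 k3 k4)
  ckeys=(c0 c1 c2 c3 "")                     # keyid 4 deliberately has no controller key
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach (see sketch above)
          done
      done
  done

Inside connect_authenticate the controller key is optional: the expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) seen in the entry immediately above yields nothing when ckeys[keyid] is empty, which is why the keyid-4 attach calls in this log carry only --dhchap-key key4.
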
00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.147 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.406 nvme0n1 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:23.406 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.407 nvme0n1 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.407 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.666 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.666 16:32:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.666 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.666 16:32:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.666 nvme0n1 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.666 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.667 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.926 nvme0n1 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
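For readers following the trace: each connect_authenticate pass above reduces to two host-side SPDK RPCs, first restricting the allowed DH-HMAC-CHAP digest/dhgroup pair and then attaching with the key under test. A minimal sketch of that pair of calls, assuming rpc_cmd wraps scripts/rpc.py and reusing the address, NQNs and key names visible in the trace (the keyring setup that auth.sh performs elsewhere is omitted):

    # Allow only the digest/dhgroup combination being exercised in this iteration.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # Attach to the target at 10.0.0.1:4420 with DH-HMAC-CHAP key 0; the controller
    # key (ckey0) is passed only for keyids that have a bidirectional secret configured.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0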
00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.926 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.185 nvme0n1 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
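After every successful attach the trace repeats the same verification and teardown before the outer loops (for dhgroup in "${dhgroups[@]}"; for keyid in "${!keys[@]}") advance to the next combination. A hedged sketch of that check, matching the rpc_cmd and jq calls shown above and again assuming scripts/rpc.py behind rpc_cmd:

    # List attached NVMe-oF controllers and extract their names.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    # Exactly the controller created by the preceding authenticated attach is expected.
    [[ $name == nvme0 ]] || exit 1
    # Detach it so the next digest/dhgroup/keyid combination starts from a clean state.
    scripts/rpc.py bdev_nvme_detach_controller nvme0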
00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.185 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.186 nvme0n1 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.186 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:24.444 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.445 nvme0n1 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.445 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.703 16:32:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.703 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.703 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:24.703 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.703 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.703 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.704 16:32:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.704 nvme0n1 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.704 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.962 nvme0n1 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.962 16:32:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.962 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.963 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.221 nvme0n1 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.221 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.222 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.480 nvme0n1 00:17:25.480 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.480 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.481 16:32:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.481 16:32:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.739 nvme0n1 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:25.739 16:32:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.739 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.740 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.740 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.740 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.997 nvme0n1 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.997 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.998 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.256 nvme0n1 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.256 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.515 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.516 16:32:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.516 16:32:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.516 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.516 16:32:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.775 nvme0n1 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.775 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.343 nvme0n1 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.343 16:32:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.343 16:32:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.629 nvme0n1 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.629 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.900 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.162 nvme0n1 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
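Each block of xtrace above covers one (digest, dhgroup, keyid) combination: host/auth.sh@102-103 install the key on the target via nvmet_auth_set_key, @60 restricts the host to that digest/dhgroup with bdev_nvme_set_options, @61 attaches with the matching DH-HMAC-CHAP key pair, and @64-65 check that a controller named nvme0 came up before detaching it. The sketch below condenses that flow; it is only a reading of the commands visible in the trace, not the literal host/auth.sh source, and it assumes the harness-provided helpers rpc_cmd (forwarding its arguments to SPDK's JSON-RPC interface) and nvmet_auth_set_key, plus keys/ckeys arrays holding the DHHC-1 secrets echoed in the log.

    # Condensed sketch of one digest/dhgroup pass, reconstructed from the trace
    # above (not the literal test source). Assumes the autotest helpers rpc_cmd
    # and nvmet_auth_set_key, and keys[]/ckeys[] arrays with the DHHC-1 secrets
    # shown in the log (ckeys[4] is empty).
    digest=sha384
    dhgroup=ffdhe4096

    for keyid in "${!keys[@]}"; do
        # Target side: register the key (and controller key, if any) for this keyid.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: only allow the digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect with the matching key pair; --dhchap-ctrlr-key is added only
        # when a controller key exists for this keyid (see the note further down).
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # DH-HMAC-CHAP succeeded only if the controller shows up; then tear it down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done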
00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.162 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.421 nvme0n1 00:17:28.421 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.421 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.422 16:32:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.422 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.422 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.422 16:32:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.680 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.680 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.680 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.680 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.680 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.680 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.680 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.680 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
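A detail that is easy to miss in the trace: at host/auth.sh@58 the optional controller key is spliced in with ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so bidirectional authentication is requested only when a ckey exists for that keyid. That is why the keyid-4 attaches above pass --dhchap-key key4 but no --dhchap-ctrlr-key. A minimal, standalone illustration of that expansion follows; the key strings are placeholders, not the test's secrets.

    # Demonstrates the ${var:+...} expansion used to make --dhchap-ctrlr-key
    # optional. Indexed array with placeholder values; runs under plain bash.
    ckeys[1]='DHHC-1:02:placeholder-controller-key'   # keyid 1 has a controller key
    ckeys[4]=''                                       # keyid 4 does not (as in the log)

    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> extra args: ${ckey[*]:-<none>}"
    done
    # keyid=1 -> extra args: --dhchap-ctrlr-key ckey1
    # keyid=4 -> extra args: <none>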
00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.681 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.250 nvme0n1 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.250 16:32:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.819 nvme0n1 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.819 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.078 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.079 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.079 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.079 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.079 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.079 16:32:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.079 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.079 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.079 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.646 nvme0n1 00:17:30.646 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.646 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.646 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.646 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.646 16:32:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.646 16:32:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.646 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.213 nvme0n1 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.213 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.472 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.473 16:32:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.473 16:32:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.041 nvme0n1 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.041 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.301 nvme0n1 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.301 16:32:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.301 nvme0n1 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.301 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.561 nvme0n1 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.561 16:32:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.561 16:32:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.561 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.562 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.562 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.562 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.562 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.562 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.562 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.562 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.562 16:32:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:32.562 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.562 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.821 nvme0n1 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.821 nvme0n1 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.821 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.081 nvme0n1 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.081 
16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.081 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.341 16:32:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.341 nvme0n1 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
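The nvmet_auth_set_key calls traced through this stretch stage the DH-HMAC-CHAP material on the kernel target side before each connect attempt: the echoed 'hmac(sha512)' and ffdhe* strings select the digest and DH group, and the DHHC-1 blobs are the host key and, when present, the controller key. A minimal sketch of what such a helper is assumed to write, using the standard nvmet configfs attributes (the host directory below is illustrative, not taken from this log):

  # Sketch only: push one host's DH-HMAC-CHAP settings into nvmet configfs.
  # The directory name and the key values are placeholders.
  nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha512)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "${key}"          > "${host}/dhchap_key"      # DHHC-1:... host key
    [[ -n ${ckey} ]] && echo "${ckey}" > "${host}/dhchap_ctrl_key"  # only for bidirectional auth
  }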
00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.341 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.342 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.342 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.600 nvme0n1 00:17:33.600 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.600 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.601 16:32:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.601 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.601 16:32:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.601 16:32:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
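The get_main_ns_ip trace unrolling here shows how the test picks the address to dial: the associative array maps each transport to the name of the environment variable holding the initiator-facing IP, and that name is then dereferenced, which is why the later checks show the literal 10.0.0.1. A condensed sketch of that logic, with TEST_TRANSPORT standing in for whatever variable the real helper reads:

  # Sketch of the address selection seen in the trace; names other than the
  # two NVMF_* entries are assumptions.
  get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # indirect expansion, 10.0.0.1 in this run
    echo "${!ip}"
  }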
00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.601 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 nvme0n1 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.860 
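The keyid 4 pass being set up next is the one with no controller key: its ckey is empty, so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 yields an empty array and the later attach call carries no --dhchap-ctrlr-key flag, i.e. bidirectional authentication is simply skipped for that key. A standalone illustration of the bash idiom (values are placeholders):

  # ${var:+word} expands to word only when var is set and non-empty, so the
  # optional flag disappears when no controller key was generated for a keyid.
  ckeys=( [1]="DHHC-1:02:placeholder" [4]="" )
  for keyid in 1 4; do
    args=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
    echo "keyid=${keyid}: ${args[*]:-<no controller-key args>}"
  done
  # keyid=1: --dhchap-ctrlr-key ckey1
  # keyid=4: <no controller-key args>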
16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.860 nvme0n1 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.860 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.120 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.380 nvme0n1 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.380 16:32:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.380 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.639 nvme0n1 00:17:34.639 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.639 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.639 16:32:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.639 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.639 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.639 16:32:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
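On the SPDK host side, each connect_authenticate pass in this trace reduces to four RPCs: restrict the allowed digests and DH groups, attach with the per-keyid DH-HMAC-CHAP secrets, check that the controller shows up as nvme0, and detach again. Written out as plain scripts/rpc.py calls (a sketch of the cycle just completed for ffdhe4096 key1; it assumes rpc_cmd wraps scripts/rpc.py and that key1/ckey1 were registered with the keyring earlier in the run, outside this excerpt):

  # One connect/verify/detach cycle, mirroring the trace above (sketch).
  RPC=scripts/rpc.py
  $RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
       --dhchap-key key1 --dhchap-ctrlr-key ckey1
  [[ $($RPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth succeeded
  $RPC bdev_nvme_detach_controller nvme0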
00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.639 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.640 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.899 nvme0n1 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.899 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 nvme0n1 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.158 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 nvme0n1 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:35.417 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
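Each positive-path iteration traced in this log follows the same shape: nvmet_auth_set_key installs the DHHC-1 key (and, when one exists, the bidirectional controller key) on the target for the chosen digest and DH group, and connect_authenticate then restricts the host to that same digest/group and attaches over TCP with the matching key pair. Below is a minimal host-side sketch of one such iteration, limited to the RPCs visible in these entries; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, and the key0/ckey0 names are assumed to have been registered earlier in auth.sh, outside this excerpt.

  # Values taken from the surrounding log entries (hypothetical standalone replay).
  digest=sha512 dhgroup=ffdhe6144 keyid=0

  # Pin the host to a single digest and DH group so the negotiation is deterministic.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key and the bidirectional controller key; this only
  # succeeds if DH-HMAC-CHAP completes against the target's configured secret.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Verify the controller actually came up, then tear it down for the next iteration.
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0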
00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.418 16:32:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.677 nvme0n1 00:17:35.677 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.677 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.677 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.677 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.677 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.677 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
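Further down in this excerpt the same RPCs drive the negative checks: with the target still requiring DH-HMAC-CHAP, bdev_nvme_attach_controller is invoked once with no key at all and once with key2 against a target entry provisioned for keyid 1, and both attempts come back as the JSON-RPC "Input/output error" responses dumped near the end of this section. A hedged condensation of those two checks, using only commands that appear in this log (NOT is the autotest_common.sh helper that inverts the exit status, so each line passes only if the attach fails):

  # No credentials at all: the authenticated target must refuse the connection.
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

  # Wrong key (key2, while the target-side entry was set up with keyid 1):
  # rejected the same way, surfacing as JSON-RPC error -5, "Input/output error".
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2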
00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.936 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.937 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.196 nvme0n1 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.196 16:32:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.764 nvme0n1 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.764 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.765 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.024 nvme0n1 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.024 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.282 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.541 nvme0n1 00:17:37.541 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.541 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.541 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.541 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.541 16:32:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.541 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.541 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.541 16:32:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.541 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.541 16:32:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:37.541 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGUxNDA5ZDhkYTFkNThkYWY3ZTkzN2M2MTgwNzlkMTMjP10n: 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: ]] 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjZhY2E3ZTAwZjY4ZGI2OTQyYTllODBkMTg5ZWZlZTUxN2QxMmQzODM4ZDAzN2UwMzc4OWQ1NGJmOGQ0ZjVmNsqKmzA=: 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.542 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.110 nvme0n1 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.110 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.368 16:32:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.934 nvme0n1 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.934 16:32:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzViY2FmYWUxNmJjNDVjOGQ5OTgyZjk3MzRhMGQ5NmUGVLTW: 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: ]] 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTk5NGViNjAyYjlmMDM2ZmZhYjM4NDA3ZjNhY2M4OTmnHeb6: 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.934 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.500 nvme0n1 00:17:39.500 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.500 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.500 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.500 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.500 16:32:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.500 16:32:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.500 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.500 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.500 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.500 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.500 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2ZhM2E3NmU3MWUxOTFmMTVhMzU3MGFlYWVhZDg1NzkwM2I1ZWE2OWQ5N2U0Zjg4HME5SQ==: 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: ]] 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI4NDgyMjhmN2IyMGIwYzIwZjEwMWEyZTEzMWE2ZjGBNQLs: 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:39.501 16:32:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.501 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.759 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.325 nvme0n1 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODU2MjA4NTQxMmUxMTdkZDdhNGY1NmJiOGU0MjY1NTg2NTM3Nzc4ZGQ1YjExZjE2MjNkN2Y5YTljOWMwYjdjMeNUJWA=: 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:40.325 16:32:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.891 nvme0n1 00:17:40.891 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.891 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.891 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.891 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.891 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIxOGM4NDgwMDc5ZmViZjQ4NWQxNDQyNGEyOGFkOTdkNDgyNDhiMWMyZGI4OWZjKOPYRw==: 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: ]] 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Nzk1ZmIyZDEyNGMxYzIyNThhZTZhMmQ3NWE5ZmIxZDdlMDZmMTBmODMyNGM1NDc4c+jQ5Q==: 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.892 
16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.892 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.150 request: 00:17:41.150 { 00:17:41.150 "name": "nvme0", 00:17:41.150 "trtype": "tcp", 00:17:41.150 "traddr": "10.0.0.1", 00:17:41.150 "adrfam": "ipv4", 00:17:41.150 "trsvcid": "4420", 00:17:41.150 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:41.150 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:41.150 "prchk_reftag": false, 00:17:41.150 "prchk_guard": false, 00:17:41.150 "hdgst": false, 00:17:41.150 "ddgst": false, 00:17:41.150 "method": "bdev_nvme_attach_controller", 00:17:41.150 "req_id": 1 00:17:41.150 } 00:17:41.150 Got JSON-RPC error response 00:17:41.150 response: 00:17:41.150 { 00:17:41.150 "code": -5, 00:17:41.150 "message": "Input/output error" 00:17:41.150 } 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.150 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.150 request: 00:17:41.150 { 00:17:41.150 "name": "nvme0", 00:17:41.150 "trtype": "tcp", 00:17:41.150 "traddr": "10.0.0.1", 00:17:41.150 "adrfam": "ipv4", 00:17:41.151 "trsvcid": "4420", 00:17:41.151 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:41.151 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:41.151 "prchk_reftag": false, 00:17:41.151 "prchk_guard": false, 00:17:41.151 "hdgst": false, 00:17:41.151 "ddgst": false, 00:17:41.151 "dhchap_key": "key2", 00:17:41.151 "method": "bdev_nvme_attach_controller", 00:17:41.151 "req_id": 1 00:17:41.151 } 00:17:41.151 Got JSON-RPC error response 00:17:41.151 response: 00:17:41.151 { 00:17:41.151 "code": -5, 00:17:41.151 "message": "Input/output error" 00:17:41.151 } 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:41.151 16:32:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.151 request: 00:17:41.151 { 00:17:41.151 "name": "nvme0", 00:17:41.151 "trtype": "tcp", 00:17:41.151 "traddr": "10.0.0.1", 00:17:41.151 "adrfam": "ipv4", 
00:17:41.151 "trsvcid": "4420", 00:17:41.151 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:41.151 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:41.151 "prchk_reftag": false, 00:17:41.151 "prchk_guard": false, 00:17:41.151 "hdgst": false, 00:17:41.151 "ddgst": false, 00:17:41.151 "dhchap_key": "key1", 00:17:41.151 "dhchap_ctrlr_key": "ckey2", 00:17:41.151 "method": "bdev_nvme_attach_controller", 00:17:41.151 "req_id": 1 00:17:41.151 } 00:17:41.151 Got JSON-RPC error response 00:17:41.151 response: 00:17:41.151 { 00:17:41.151 "code": -5, 00:17:41.151 "message": "Input/output error" 00:17:41.151 } 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.151 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.151 rmmod nvme_tcp 00:17:41.151 rmmod nvme_fabrics 00:17:41.409 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.409 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:41.409 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:41.409 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78638 ']' 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78638 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78638 ']' 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78638 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78638 00:17:41.410 killing process with pid 78638 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78638' 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78638 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78638 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.410 
16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.410 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.668 16:32:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.668 16:32:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:41.668 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:41.668 16:32:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:41.668 16:32:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:42.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:42.520 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:42.520 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:42.520 16:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5kB /tmp/spdk.key-null.QeE /tmp/spdk.key-sha256.jfG /tmp/spdk.key-sha384.KgU /tmp/spdk.key-sha512.KE7 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:42.520 16:32:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:42.778 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:42.778 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:42.778 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:43.037 00:17:43.037 real 0m35.801s 00:17:43.037 user 0m32.042s 00:17:43.037 sys 0m3.837s 00:17:43.037 16:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.037 16:32:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.037 
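The failed attach attempts traced above are the negative half of the nvmf_auth_host checks: after a successful attach/detach cycle proves the matching key works, the host retries against the kernel nvmet target on 10.0.0.1:4420 with no DH-HMAC-CHAP key, with only key2, and with a mismatched key1/ckey2 pair, and each call is wrapped in NOT so the expected outcome is exactly the JSON-RPC code -5 (Input/output error) responses shown. A minimal sketch of one such attempt, using only the flags that appear in the trace (key names and NQNs are the test's own, already registered by the script):

    # expected to fail: controller key does not match the target's configured secret
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2

The cleanup that follows (nvmftestfini, rmmod of nvme-tcp/nvme-fabrics, killprocess of the target app, removal of the nvmet configfs entries and the /tmp/spdk.key-* files) returns the machine to a neutral state before the digest suite starts.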
************************************ 00:17:43.037 END TEST nvmf_auth_host 00:17:43.037 ************************************ 00:17:43.037 16:32:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:43.037 16:32:28 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:43.037 16:32:28 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:43.037 16:32:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:43.037 16:32:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.037 16:32:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.037 ************************************ 00:17:43.037 START TEST nvmf_digest 00:17:43.037 ************************************ 00:17:43.037 16:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:43.037 * Looking for test storage... 00:17:43.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:43.037 16:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:43.037 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:43.037 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.037 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.037 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.037 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.037 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:43.038 Cannot find device "nvmf_tgt_br" 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:43.038 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.297 Cannot find device "nvmf_tgt_br2" 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:43.297 Cannot find device "nvmf_tgt_br" 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:43.297 16:32:28 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:43.297 Cannot find device "nvmf_tgt_br2" 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:43.297 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:43.556 16:32:28 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:43.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:17:43.556 00:17:43.556 --- 10.0.0.2 ping statistics --- 00:17:43.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.556 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:43.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:43.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:43.556 00:17:43.556 --- 10.0.0.3 ping statistics --- 00:17:43.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.556 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:43.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:17:43.556 00:17:43.556 --- 10.0.0.1 ping statistics --- 00:17:43.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.556 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:43.556 ************************************ 00:17:43.556 START TEST nvmf_digest_clean 00:17:43.556 ************************************ 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:43.556 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:43.557 16:32:28 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:43.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80211 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80211 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80211 ']' 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.557 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:43.557 [2024-07-15 16:32:28.995699] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:17:43.557 [2024-07-15 16:32:28.995790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.815 [2024-07-15 16:32:29.138095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.815 [2024-07-15 16:32:29.265971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.815 [2024-07-15 16:32:29.266035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.815 [2024-07-15 16:32:29.266058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.815 [2024-07-15 16:32:29.266069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.815 [2024-07-15 16:32:29.266078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
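By this point nvmftestinit has rebuilt the virtual topology for the digest tests (initiator veth nvmf_init_if at 10.0.0.1 on the host, target veths at 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all enslaved to the nvmf_br bridge and verified with the pings above), and nvmfappstart has launched nvmf_tgt inside that namespace with --wait-for-rpc so nothing is configured until the test issues RPCs. A condensed sketch of that start-up, using only commands visible in the trace; the polling loop stands in for the waitforlisten helper and is an assumption about its effect, not its implementation:

    # start the target in the test namespace, paused until framework_start_init
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # wait until the target's RPC socket answers (waitforlisten does this with a timeout)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done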
00:17:43.815 [2024-07-15 16:32:29.266115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:44.752 [2024-07-15 16:32:30.117237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:44.752 null0 00:17:44.752 [2024-07-15 16:32:30.168923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.752 [2024-07-15 16:32:30.193112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80243 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80243 /var/tmp/bperf.sock 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80243 ']' 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:44.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.752 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:44.752 [2024-07-15 16:32:30.248670] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:17:44.752 [2024-07-15 16:32:30.249136] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80243 ] 00:17:45.012 [2024-07-15 16:32:30.382433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.012 [2024-07-15 16:32:30.505227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.948 16:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.948 16:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:45.948 16:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:45.948 16:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:45.948 16:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:46.207 [2024-07-15 16:32:31.508684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:46.207 16:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.207 16:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.466 nvme0n1 00:17:46.466 16:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:46.466 16:32:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:46.466 Running I/O for 2 seconds... 
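Each run_bperf iteration follows the pattern that just played out: bdevperf is started against its own RPC socket in a paused state, its framework is initialized, a controller is attached to the 10.0.0.2:4420 listener with data digest enabled (--ddgst) so every I/O carries a CRC32C data digest, and the bdevperf.py helper then drives the 2-second workload. A sketch of this first run (randread, 4 KiB, queue depth 128) using the binaries and flags from the trace:

    BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # paused bdevperf instance on its own RPC socket
    $BPERF -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $RPC -s /var/tmp/bperf.sock framework_start_init

    # attach to the target with data digest enabled
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # run the timed workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests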
00:17:49.001 00:17:49.001 Latency(us) 00:17:49.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.001 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:49.001 nvme0n1 : 2.01 14971.59 58.48 0.00 0.00 8542.95 3574.69 24427.05 00:17:49.001 =================================================================================================================== 00:17:49.001 Total : 14971.59 58.48 0.00 0.00 8542.95 3574.69 24427.05 00:17:49.001 0 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:49.001 | select(.opcode=="crc32c") 00:17:49.001 | "\(.module_name) \(.executed)"' 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80243 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80243 ']' 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80243 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80243 00:17:49.001 killing process with pid 80243 00:17:49.001 Received shutdown signal, test time was about 2.000000 seconds 00:17:49.001 00:17:49.001 Latency(us) 00:17:49.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.001 =================================================================================================================== 00:17:49.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80243' 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80243 00:17:49.001 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80243 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80305 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80305 /var/tmp/bperf.sock 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80305 ']' 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:49.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.261 16:32:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:49.261 [2024-07-15 16:32:34.629907] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:17:49.261 [2024-07-15 16:32:34.630319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80305 ] 00:17:49.261 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:49.261 Zero copy mechanism will not be used. 
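The second bdevperf instance repeats the same flow with 128 KiB reads at a queue depth of 16; because the 131072-byte I/O size exceeds the 65536-byte zero-copy threshold, bdevperf reports (just above) that the zero-copy path is disabled for this run. Relative to the sketch after the first run, only the workload flags change:

    # 128 KiB random reads, queue depth 16 (zero copy disabled for this I/O size)
    $BPERF -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &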
00:17:49.261 [2024-07-15 16:32:34.769296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.520 [2024-07-15 16:32:34.885446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.087 16:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.087 16:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:50.087 16:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:50.087 16:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:50.087 16:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:50.346 [2024-07-15 16:32:35.856689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:50.603 16:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.603 16:32:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.862 nvme0n1 00:17:50.862 16:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:50.862 16:32:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:50.862 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:50.862 Zero copy mechanism will not be used. 00:17:50.862 Running I/O for 2 seconds... 
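After each run the test decides pass/fail by asking the bdevperf accel layer which module executed the CRC32C operations: with scan_dsa=false the expected module is software, and the executed count must be non-zero. The check reduces to the RPC-plus-jq pipeline seen in the trace (reusing $RPC from the sketch above):

    $RPC -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected: "software <non-zero count>"  -- DSA offload is not enabled in these runs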
00:17:53.397 00:17:53.397 Latency(us) 00:17:53.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.397 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:53.397 nvme0n1 : 2.00 7875.63 984.45 0.00 0.00 2028.28 1705.43 3783.21 00:17:53.397 =================================================================================================================== 00:17:53.397 Total : 7875.63 984.45 0.00 0.00 2028.28 1705.43 3783.21 00:17:53.397 0 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:53.397 | select(.opcode=="crc32c") 00:17:53.397 | "\(.module_name) \(.executed)"' 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80305 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80305 ']' 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80305 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80305 00:17:53.397 killing process with pid 80305 00:17:53.397 Received shutdown signal, test time was about 2.000000 seconds 00:17:53.397 00:17:53.397 Latency(us) 00:17:53.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.397 =================================================================================================================== 00:17:53.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80305' 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80305 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80305 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80365 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80365 /var/tmp/bperf.sock 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80365 ']' 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:53.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.397 16:32:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:53.397 [2024-07-15 16:32:38.918838] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:17:53.397 [2024-07-15 16:32:38.919182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80365 ] 00:17:53.656 [2024-07-15 16:32:39.058345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.656 [2024-07-15 16:32:39.167115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.589 16:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.589 16:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:54.589 16:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:54.589 16:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:54.589 16:32:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:54.845 [2024-07-15 16:32:40.188129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:54.845 16:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.845 16:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:55.104 nvme0n1 00:17:55.104 16:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:55.104 16:32:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:55.361 Running I/O for 2 seconds... 
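Once bdevperf is listening, the whole digest exercise reduces to the three RPC calls visible in the trace above, against the bperf socket. The only digest-specific piece is --ddgst, which enables the NVMe/TCP data digest and therefore the per-PDU crc32c work that the test later inspects:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    $rpc framework_start_init
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

bdevperf then drives the configured workload against nvme0n1 for the 2 seconds reported in the result table that follows.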
00:17:57.261 00:17:57.261 Latency(us) 00:17:57.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.262 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.262 nvme0n1 : 2.01 16336.90 63.82 0.00 0.00 7828.03 2368.23 15847.80 00:17:57.262 =================================================================================================================== 00:17:57.262 Total : 16336.90 63.82 0.00 0.00 7828.03 2368.23 15847.80 00:17:57.262 0 00:17:57.262 16:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:57.262 16:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:57.262 16:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:57.262 16:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:57.262 | select(.opcode=="crc32c") 00:17:57.262 | "\(.module_name) \(.executed)"' 00:17:57.262 16:32:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80365 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80365 ']' 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80365 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80365 00:17:57.519 killing process with pid 80365 00:17:57.519 Received shutdown signal, test time was about 2.000000 seconds 00:17:57.519 00:17:57.519 Latency(us) 00:17:57.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.519 =================================================================================================================== 00:17:57.519 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80365' 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80365 00:17:57.519 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80365 00:17:57.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
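The MiB/s column in these result tables is derived rather than measured separately: it is IOPS times the configured IO size divided by 2^20, which is easy to confirm for the two completed runs above:

    awk 'BEGIN { printf "%.2f\n", 16336.90 * 4096   / 1048576 }'   # 63.82  (4 KiB randwrite, qd 128)
    awk 'BEGIN { printf "%.2f\n",  7875.63 * 131072 / 1048576 }'   # 984.45 (128 KiB randread, qd 16)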
00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80424 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80424 /var/tmp/bperf.sock 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80424 ']' 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.777 16:32:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:58.072 [2024-07-15 16:32:43.344800] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:17:58.072 [2024-07-15 16:32:43.345163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:17:58.072 Zero copy mechanism will not be used. 
00:17:58.072 llocations --file-prefix=spdk_pid80424 ] 00:17:58.072 [2024-07-15 16:32:43.486782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.072 [2024-07-15 16:32:43.604775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.004 16:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.004 16:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:59.004 16:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:59.004 16:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:59.004 16:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:59.261 [2024-07-15 16:32:44.591612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:59.261 16:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.261 16:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.518 nvme0n1 00:17:59.518 16:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:59.518 16:32:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:59.518 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:59.518 Zero copy mechanism will not be used. 00:17:59.518 Running I/O for 2 seconds... 
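The acceptance check that follows every run (host/digest.sh lines 93 to 96 in this trace) is worth spelling out: it pulls accel-framework statistics out of the bdevperf process and requires that crc32c was executed at least once, and by the expected module, which is "software" in this job because DSA scanning is disabled (scan_dsa=false). Roughly:

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]]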
00:18:02.049 00:18:02.049 Latency(us) 00:18:02.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.049 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:02.049 nvme0n1 : 2.00 6341.86 792.73 0.00 0.00 2517.12 2159.71 11141.12 00:18:02.049 =================================================================================================================== 00:18:02.049 Total : 6341.86 792.73 0.00 0.00 2517.12 2159.71 11141.12 00:18:02.049 0 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:02.049 | select(.opcode=="crc32c") 00:18:02.049 | "\(.module_name) \(.executed)"' 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80424 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80424 ']' 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80424 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80424 00:18:02.049 killing process with pid 80424 00:18:02.049 Received shutdown signal, test time was about 2.000000 seconds 00:18:02.049 00:18:02.049 Latency(us) 00:18:02.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.049 =================================================================================================================== 00:18:02.049 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80424' 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80424 00:18:02.049 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80424 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80211 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80211 ']' 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80211 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80211 00:18:02.308 killing process with pid 80211 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80211' 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80211 00:18:02.308 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80211 00:18:02.567 ************************************ 00:18:02.567 END TEST nvmf_digest_clean 00:18:02.567 ************************************ 00:18:02.567 00:18:02.567 real 0m18.961s 00:18:02.567 user 0m36.874s 00:18:02.567 sys 0m4.656s 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:02.567 ************************************ 00:18:02.567 START TEST nvmf_digest_error 00:18:02.567 ************************************ 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80509 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80509 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80509 ']' 00:18:02.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
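For the nvmf_digest_error suite the target itself is restarted with --wait-for-rpc, because crc32c has to be rerouted to the error-injection accel module before framework initialization; -e 0xFFFF additionally enables every tracepoint group. A sketch of that launch step, assuming the network namespace and paths used by this job:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # same waitforlisten idea as for bperf.sock, but on the default target socket /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done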
00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.567 16:32:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.567 [2024-07-15 16:32:48.004114] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:18:02.567 [2024-07-15 16:32:48.004195] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.826 [2024-07-15 16:32:48.139014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.826 [2024-07-15 16:32:48.252352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.826 [2024-07-15 16:32:48.252408] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.826 [2024-07-15 16:32:48.252435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.826 [2024-07-15 16:32:48.252443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.826 [2024-07-15 16:32:48.252450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
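The app_setup_trace notices above are actionable while this test is running: with shared-memory id 0, a trace snapshot can be taken from another shell, or the shm file can simply be copied for later analysis, as the notice suggests (spdk_trace is assumed to live under build/bin in this checkout):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0    # live snapshot of nvmf tracepoints
    cp /dev/shm/nvmf_trace.0 /tmp/                                    # keep the raw trace for offline debug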
00:18:02.826 [2024-07-15 16:32:48.252475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.393 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.393 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:03.393 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:03.393 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:03.393 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.652 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.652 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:03.652 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.652 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.652 [2024-07-15 16:32:48.981070] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:03.652 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.652 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:03.652 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:03.652 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.652 16:32:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.652 [2024-07-15 16:32:49.045535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:03.652 null0 00:18:03.652 [2024-07-15 16:32:49.097887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.652 [2024-07-15 16:32:49.122048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80541 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80541 /var/tmp/bperf.sock 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80541 ']' 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 
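On the target side, the notices above (crc32c assigned to the "error" module, null0 created, TCP transport up, listener on 10.0.0.2:4420) correspond roughly to the RPC sequence below. Only accel_assign_opc appears verbatim in the trace; the remaining calls are the standard SPDK RPCs for this topology, with an illustrative null-bdev size and serial number rather than the exact values used by digest.sh:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
    rpc accel_assign_opc -o crc32c -m error        # must run before framework init, hence --wait-for-rpc
    rpc framework_start_init
    rpc bdev_null_create null0 100 4096            # 100 MiB null bdev, 4 KiB blocks (illustrative sizes)
    rpc nvmf_create_transport -t tcp
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4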
00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:03.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.652 16:32:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.652 [2024-07-15 16:32:49.175890] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:18:03.652 [2024-07-15 16:32:49.176228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80541 ] 00:18:03.910 [2024-07-15 16:32:49.311695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.168 [2024-07-15 16:32:49.467740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.168 [2024-07-15 16:32:49.527481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:04.734 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.734 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:04.734 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:04.734 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:04.993 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:04.993 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.993 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:04.993 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.993 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.993 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:05.561 nvme0n1 00:18:05.561 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:05.561 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.561 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:05.561 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.561 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
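The error variant differs from the clean runs in two ways visible in the trace above: the initiator is configured for per-command error statistics and unlimited bdev retries, and the target's accel error module is armed to corrupt 256 crc32c operations after the controller (again attached with --ddgst) is up. Pulling just those commands out of the trace:

    bperf='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    tgt='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'            # target RPC, default /var/tmp/spdk.sock
    $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $tgt accel_error_inject_error -o crc32c -t disable           # start from a clean slate
    $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $tgt accel_error_inject_error -o crc32c -t corrupt -i 256    # corrupt the next 256 crc32c operations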
00:18:05.561 16:32:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:05.561 Running I/O for 2 seconds... 00:18:05.561 [2024-07-15 16:32:50.992590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.561 [2024-07-15 16:32:50.992680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.561 [2024-07-15 16:32:50.992695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.561 [2024-07-15 16:32:51.009466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.561 [2024-07-15 16:32:51.009518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.561 [2024-07-15 16:32:51.009547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.561 [2024-07-15 16:32:51.025821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.561 [2024-07-15 16:32:51.025886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.561 [2024-07-15 16:32:51.025916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.561 [2024-07-15 16:32:51.042146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.561 [2024-07-15 16:32:51.042185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.561 [2024-07-15 16:32:51.042215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.561 [2024-07-15 16:32:51.058126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.561 [2024-07-15 16:32:51.058162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.561 [2024-07-15 16:32:51.058191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.561 [2024-07-15 16:32:51.073988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.561 [2024-07-15 16:32:51.074023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.561 [2024-07-15 16:32:51.074052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.561 [2024-07-15 16:32:51.089261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.561 [2024-07-15 16:32:51.089299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:19702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.561 [2024-07-15 16:32:51.089329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.561 [2024-07-15 16:32:51.105033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.561 [2024-07-15 16:32:51.105131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.561 [2024-07-15 16:32:51.105162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.122566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.122602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.122631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.138374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.138412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.138440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.155340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.155380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.155394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.171960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.171996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.172025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.188574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.188632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.188662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.204473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.204511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.204539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.220558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.220596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.220610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.236149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.236188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.236217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.252720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.252758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.252787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.268654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.268692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.268722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.285094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.285139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.821 [2024-07-15 16:32:51.285153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.821 [2024-07-15 16:32:51.301904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.821 [2024-07-15 16:32:51.301942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.822 [2024-07-15 16:32:51.301972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.822 [2024-07-15 16:32:51.319144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.822 
[2024-07-15 16:32:51.319191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.822 [2024-07-15 16:32:51.319221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.822 [2024-07-15 16:32:51.336730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.822 [2024-07-15 16:32:51.336803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.822 [2024-07-15 16:32:51.336818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.822 [2024-07-15 16:32:51.353703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.822 [2024-07-15 16:32:51.353742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.822 [2024-07-15 16:32:51.353756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.822 [2024-07-15 16:32:51.370987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:05.822 [2024-07-15 16:32:51.371024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.822 [2024-07-15 16:32:51.371068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.081 [2024-07-15 16:32:51.388711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.388752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.388767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.406337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.406377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.406392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.423639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.423678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.423709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.439752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.439789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.439819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.456565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.456603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.456633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.474242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.474305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.474334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.490682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.490723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.490742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.508229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.508265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.508309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.524989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.525024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.525081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.541035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.541147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.541163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.556939] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.556997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.557027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.572547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.572584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.572613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.588012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.588047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.588075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.603351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.603386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.603414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.082 [2024-07-15 16:32:51.618055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.082 [2024-07-15 16:32:51.618088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.082 [2024-07-15 16:32:51.618117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.341 [2024-07-15 16:32:51.633929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.341 [2024-07-15 16:32:51.633974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.341 [2024-07-15 16:32:51.634003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.341 [2024-07-15 16:32:51.649567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.341 [2024-07-15 16:32:51.649602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.341 [2024-07-15 16:32:51.649630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
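Each injected corruption surfaces on the host as the pair of messages repeated throughout this stretch of the log: a data digest error from nvme_tcp.c when the received PDU's crc32c fails to verify, followed by a TRANSIENT TRANSPORT ERROR (00/22) completion, which the bdev layer keeps retrying because of the --bdev-retry-count -1 setting above. If this console output were saved to a file (the path below is hypothetical), the totals could be compared against the 256 injections requested:

    grep -c 'data digest error on tqpair' /tmp/bperf_err.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' /tmp/bperf_err.log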
00:18:06.341 [2024-07-15 16:32:51.666078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.341 [2024-07-15 16:32:51.666116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.341 [2024-07-15 16:32:51.666130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.341 [2024-07-15 16:32:51.683202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.341 [2024-07-15 16:32:51.683242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.341 [2024-07-15 16:32:51.683255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.341 [2024-07-15 16:32:51.700737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.341 [2024-07-15 16:32:51.700804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.700834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.718243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.718278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.718307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.736132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.736199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.736229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.754327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.754399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.754430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.772293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.772383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.772413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.788702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.788739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.788769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.805188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.805226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.805241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.822515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.822552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.822581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.839196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.839234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.839248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.855953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.855992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.856022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.873601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.873665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.873691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.342 [2024-07-15 16:32:51.891522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.342 [2024-07-15 16:32:51.891584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.342 [2024-07-15 16:32:51.891615] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:51.908234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:51.908270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 16:32:51.908315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:51.924867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:51.924942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 16:32:51.924957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:51.940518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:51.940553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 16:32:51.940581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:51.956033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:51.956069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 16:32:51.956098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:51.972125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:51.972165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 16:32:51.972179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:51.988987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:51.989022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 16:32:51.989077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:52.004516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:52.004552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 
16:32:52.004580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:52.020217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:52.020253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 16:32:52.020297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:52.043633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:52.043671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 16:32:52.043701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:52.059739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.601 [2024-07-15 16:32:52.059978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.601 [2024-07-15 16:32:52.060015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.601 [2024-07-15 16:32:52.075466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.602 [2024-07-15 16:32:52.075504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.602 [2024-07-15 16:32:52.075532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.602 [2024-07-15 16:32:52.090930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.602 [2024-07-15 16:32:52.090965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.602 [2024-07-15 16:32:52.090995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.602 [2024-07-15 16:32:52.106647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.602 [2024-07-15 16:32:52.106684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.602 [2024-07-15 16:32:52.106713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.602 [2024-07-15 16:32:52.123214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.602 [2024-07-15 16:32:52.123251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12419 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.602 [2024-07-15 16:32:52.123281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.602 [2024-07-15 16:32:52.140765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.602 [2024-07-15 16:32:52.140804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.602 [2024-07-15 16:32:52.140818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.158583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.158636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.158649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.174634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.174671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.174700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.191191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.191227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.191255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.206621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.206657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.206687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.222154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.222189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.222218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.239011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.239047] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.239076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.255511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.255546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.255575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.271147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.271182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.271211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.287242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.287278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.287308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.304291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.304345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.304374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.320330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.320366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.320395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.336133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.336170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.336199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.860 [2024-07-15 16:32:52.352459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.860 [2024-07-15 16:32:52.352496] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.860 [2024-07-15 16:32:52.352532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.861 [2024-07-15 16:32:52.368140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.861 [2024-07-15 16:32:52.368193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.861 [2024-07-15 16:32:52.368207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.861 [2024-07-15 16:32:52.385301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.861 [2024-07-15 16:32:52.385357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.861 [2024-07-15 16:32:52.385371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.861 [2024-07-15 16:32:52.402343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:06.861 [2024-07-15 16:32:52.402389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.861 [2024-07-15 16:32:52.402403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.419552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.419605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.419635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.436575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.436614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.436629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.453222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.453261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.453275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.470145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 
00:18:07.120 [2024-07-15 16:32:52.470182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.470196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.487142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.487181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.487195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.503949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.503987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.504016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.520689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.520741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.520771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.537381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.537434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.537463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.554451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.554490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.554530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.571453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.571491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.571520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.588034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.588074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.588104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.604818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.604871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.604885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.621870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.621921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.621951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.638611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.638651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.638682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.120 [2024-07-15 16:32:52.655321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.120 [2024-07-15 16:32:52.655358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.120 [2024-07-15 16:32:52.655387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.379 [2024-07-15 16:32:52.672404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.379 [2024-07-15 16:32:52.672466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.379 [2024-07-15 16:32:52.672481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.379 [2024-07-15 16:32:52.689569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.689636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.689652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.706908] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.706962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.706976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.724193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.724232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.724246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.741770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.741837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.741852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.759131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.759198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.759214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.776392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.776428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.776458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.793419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.793460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.793474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.810469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.810510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.810524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:07.380 [2024-07-15 16:32:52.827447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.827484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.827498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.844460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.844499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.844513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.861354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.861460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.861489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.878946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.879011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.879041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.895892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.895966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.895980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.380 [2024-07-15 16:32:52.913395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.380 [2024-07-15 16:32:52.913432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.380 [2024-07-15 16:32:52.913446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.639 [2024-07-15 16:32:52.931581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.639 [2024-07-15 16:32:52.931617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.639 [2024-07-15 16:32:52.931646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.639 [2024-07-15 16:32:52.948610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.639 [2024-07-15 16:32:52.948648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.639 [2024-07-15 16:32:52.948678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.639 [2024-07-15 16:32:52.965961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd2d020) 00:18:07.639 [2024-07-15 16:32:52.965996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.639 [2024-07-15 16:32:52.966026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.639 00:18:07.639 Latency(us) 00:18:07.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.639 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:07.639 nvme0n1 : 2.00 15151.67 59.19 0.00 0.00 8441.61 7208.96 31218.97 00:18:07.639 =================================================================================================================== 00:18:07.639 Total : 15151.67 59.19 0.00 0.00 8441.61 7208.96 31218.97 00:18:07.639 0 00:18:07.639 16:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:07.639 16:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:07.639 16:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:07.639 | .driver_specific 00:18:07.639 | .nvme_error 00:18:07.639 | .status_code 00:18:07.639 | .command_transient_transport_error' 00:18:07.639 16:32:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 119 > 0 )) 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80541 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80541 ']' 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80541 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80541 00:18:07.897 killing process with pid 80541 00:18:07.897 Received shutdown signal, test time was about 2.000000 seconds 00:18:07.897 00:18:07.897 Latency(us) 00:18:07.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.897 =================================================================================================================== 00:18:07.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 
-- # process_name=reactor_1 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80541' 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80541 00:18:07.897 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80541 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80601 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80601 /var/tmp/bperf.sock 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80601 ']' 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:08.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.156 16:32:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:08.156 [2024-07-15 16:32:53.536661] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:18:08.156 [2024-07-15 16:32:53.537001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80601 ] 00:18:08.156 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:08.156 Zero copy mechanism will not be used. 
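The digest.sh trace in the chunk above is the pass/fail check for the run that just finished: get_transient_errcount queries bdev_get_iostat over bdevperf's RPC socket and extracts the transient-transport-error counter from the per-bdev NVMe error statistics (collected because bdev_nvme_set_options is called with --nvme-error-stat, as the next run's trace shows), and host/digest.sh@71 passes only when that counter is non-zero; here it read 119. The finished bdevperf instance (pid 80541) is then killed and a new one is launched for the 131072-byte, queue-depth-16 randread pass. A minimal sketch of the same counter check, with the socket path, bdev name, and jq filter copied from the trace:

# Count the NVMe transient transport errors recorded for nvme0n1 by the bdevperf
# app listening on /var/tmp/bperf.sock (paths and names as used in this run).
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest-error test passes only if at least one such error was observed:
(( errcount > 0 ))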
00:18:08.156 [2024-07-15 16:32:53.677704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.415 [2024-07-15 16:32:53.801279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.415 [2024-07-15 16:32:53.860564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:08.983 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.983 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:08.983 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:08.983 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:09.242 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:09.242 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.242 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.242 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.242 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.242 16:32:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.501 nvme0n1 00:18:09.501 16:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:09.501 16:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.501 16:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.501 16:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.501 16:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:09.501 16:32:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:09.761 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:09.761 Zero copy mechanism will not be used. 00:18:09.761 Running I/O for 2 seconds... 
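The setup traced above is what produces the wall of digest errors that follows: bdevperf is restarted with a 131072-byte random-read workload at queue depth 16 and -z, so it idles until a perform_tests RPC arrives; bdev_nvme_set_options turns on per-error-code NVMe statistics and a bdev retry count of -1; crc32c error injection is disabled while bdev_nvme_attach_controller connects to nqn.2016-06.io.spdk:cnode1 over TCP with data digest enabled (--ddgst); the injector is then switched to corrupt crc32c results (-t corrupt -i 32); and bdevperf.py perform_tests kicks off the I/O. Each corrupted digest shows up below as a "data digest error" on the qpair followed by a READ completed with TRANSIENT TRANSPORT ERROR (00/22). A sketch of the same sequence, with the commands copied from the trace; the socket rpc_cmd talks to is not visible in this log, so the target-side socket below is an assumption (SPDK's default is /var/tmp/spdk.sock), and the polling loop is only a stand-in for the harness's waitforlisten helper:

BPERF="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed socket for rpc_cmd

# Relaunch bdevperf with the flags from the trace; -z makes it wait for perform_tests.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
# Wait until the RPC socket accepts connections (stand-in for waitforlisten).
until $BPERF rpc_get_methods &> /dev/null; do sleep 0.1; done

$BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # collect NVMe error stats
$RPC accel_error_inject_error -o crc32c -t disable                      # no injection while attaching
$BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                              # TCP data digest enabled
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32                # corrupt crc32c results, as traced
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests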
00:18:09.761 [2024-07-15 16:32:55.139339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.139390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.139421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.143626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.143664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.143694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.147975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.148016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.148029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.152324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.152363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.152393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.156634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.156672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.156701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.160922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.160960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.160989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.165110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.165150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.165164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.169219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.169258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.169272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.173370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.173423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.173452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.177745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.177782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.177810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.182075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.182115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.182145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.186643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.186684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.186713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.191187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.191242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.191272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.195703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.195757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.195787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.200035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.200071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.200100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.204390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.204428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.204459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.208927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.208980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.208994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.213467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.213507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.213521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.218040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.218075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.218104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.222614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.222655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.222669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.227112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.227149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:09.761 [2024-07-15 16:32:55.227178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.231460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.231500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.231514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.235975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.236013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.236042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.240325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.240362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.240391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.244563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.244600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.244628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.761 [2024-07-15 16:32:55.248805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.761 [2024-07-15 16:32:55.248841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.761 [2024-07-15 16:32:55.248881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.253313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.253356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.253370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.257942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.257994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.258023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.262382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.262421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.262449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.266607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.266645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.266674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.270899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.270936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.270965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.275016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.275052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.275081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.279254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.279291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.279319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.283382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.283418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.283447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.287913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.287952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.287981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.292379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.292419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.292447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.296552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.296591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.296620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.300678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.300715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.300743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.304948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.304983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.305012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.762 [2024-07-15 16:32:55.309513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:09.762 [2024-07-15 16:32:55.309579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.762 [2024-07-15 16:32:55.309610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.022 [2024-07-15 16:32:55.314187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.022 [2024-07-15 16:32:55.314227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.022 [2024-07-15 16:32:55.314256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.022 [2024-07-15 16:32:55.318896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 
[... the same three-record sequence repeats roughly every 4-5 ms for READ commands at varying LBAs, from [2024-07-15 16:32:55.296552] through [2024-07-15 16:32:55.892264] (console timestamps 00:18:09.762 through 00:18:10.547): nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0), then nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 len:32, then nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0 ...]
00:18:10.547 [2024-07-15 16:32:55.896558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0)
00:18:10.547 [2024-07-15 16:32:55.896599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:10.547 [2024-07-15 16:32:55.896613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.900935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.900989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.901003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.905305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.905345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.905359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.910014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.910053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.910082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.914334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.914404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.914418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.919092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.919144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.919158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.923617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.923657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.923671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.928293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.928369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.928384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.933335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.933380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.933394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.938164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.938222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.938253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.942730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.942831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.942860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.947455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.947501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.947516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.952339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.952410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.952424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.957107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.957147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.547 [2024-07-15 16:32:55.957161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.961739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.547 [2024-07-15 16:32:55.961792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:10.547 [2024-07-15 16:32:55.961821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.547 [2024-07-15 16:32:55.966209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:55.966251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:55.966265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:55.970898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:55.970936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:55.970961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:55.975493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:55.975538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:55.975552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:55.980232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:55.980305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:55.980319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:55.984870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:55.984935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:55.984965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:55.989507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:55.989552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:55.989568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:55.993973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:55.994015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:55.994029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:55.998429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:55.998472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:55.998486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.002878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.002945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.002960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.007457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.007498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.007512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.011979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.012037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.012052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.016502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.016547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.016561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.020992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.021034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.021061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.025374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.025426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.025440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.029796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.029838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.029852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.034229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.034284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.034298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.038699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.038755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.038770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.043112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.043152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.043166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.047379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.047434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.047448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.051825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.051893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.051908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.056176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 
00:18:10.548 [2024-07-15 16:32:56.056231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.056245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.060590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.060631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.060645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.065062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.065106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.065119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.069430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.069470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.069483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.073791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.073845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.073869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.078263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.078317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.078346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.082680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.082737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.082750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.087215] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.087256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.087270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.548 [2024-07-15 16:32:56.091676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.548 [2024-07-15 16:32:56.091735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.548 [2024-07-15 16:32:56.091749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.096417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.096474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.096496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.100804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.100888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.100914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.105646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.105703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.105726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.111102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.111159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.111178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.116601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.116657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.116679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.122059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.122128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.122143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.126929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.126986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.127010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.132345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.132402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.132424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.137848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.137913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.137929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.142329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.142387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.142401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.146960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.147017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.147032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.151394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.151450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.151464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.156496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.156555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.156574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.162047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.162109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.162133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.166708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.166752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.166766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.171180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.171237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.171267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.175752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.175796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.175811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.180224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.180266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.180281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.184670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.810 [2024-07-15 16:32:56.184711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.810 [2024-07-15 16:32:56.184725] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.810 [2024-07-15 16:32:56.189207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.189249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.189264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.193529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.193575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.193590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.198030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.198073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.198088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.202360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.202425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.202454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.206754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.206810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.206839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.211052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.211106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.211135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.215567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.215611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.215625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.219987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.220041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.220055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.224557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.224600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.224623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.229183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.229224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.229238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.233794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.233840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.233868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.238421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.238464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.238479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.242984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.243038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.243068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.247455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.247521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.247535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.252128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.252197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.252211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.256614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.256654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.256668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.261168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.261210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.261230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.265578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.265619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.265634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.270305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.270351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.270366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.275013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.275056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.275070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.279503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.279545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.279559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.284135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.284206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.284221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.288934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.288986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.289000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.293550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.293605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.293619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.298208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.298254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.298276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.302885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.302938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.302954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.307430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.307485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.307514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.311959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 
00:18:10.811 [2024-07-15 16:32:56.311998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.312013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.811 [2024-07-15 16:32:56.316535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.811 [2024-07-15 16:32:56.316590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.811 [2024-07-15 16:32:56.316620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.812 [2024-07-15 16:32:56.321255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.812 [2024-07-15 16:32:56.321298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.812 [2024-07-15 16:32:56.321323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.812 [2024-07-15 16:32:56.325810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.812 [2024-07-15 16:32:56.325866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.812 [2024-07-15 16:32:56.325882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.812 [2024-07-15 16:32:56.330471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.812 [2024-07-15 16:32:56.330515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.812 [2024-07-15 16:32:56.330529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.812 [2024-07-15 16:32:56.335077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.812 [2024-07-15 16:32:56.335117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.812 [2024-07-15 16:32:56.335131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.812 [2024-07-15 16:32:56.339638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.812 [2024-07-15 16:32:56.339678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.812 [2024-07-15 16:32:56.339693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.812 [2024-07-15 16:32:56.344054] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.812 [2024-07-15 16:32:56.344094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.812 [2024-07-15 16:32:56.344108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.812 [2024-07-15 16:32:56.348506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.812 [2024-07-15 16:32:56.348546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.812 [2024-07-15 16:32:56.348560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.812 [2024-07-15 16:32:56.353073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.812 [2024-07-15 16:32:56.353113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.812 [2024-07-15 16:32:56.353127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.812 [2024-07-15 16:32:56.357578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:10.812 [2024-07-15 16:32:56.357621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.812 [2024-07-15 16:32:56.357635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.362256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.362317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.362341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.366761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.366804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.366820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.371315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.371371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.371385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.375872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.375956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.375971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.381605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.381655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.381670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.386042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.386083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.386097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.390578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.390619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.390634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.395202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.395256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.395287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.399714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.399785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.399799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.404214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.404268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.404298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.408696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.408751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.408780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.413215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.413255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.413270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.417585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.417653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.417682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.422074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.422127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.422141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.426499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.426553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.426567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.430946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.431010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.431023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.435424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.435477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.435507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.439865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.439927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.439957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.444525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.444582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.444612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.449281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.449331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.449346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.454128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.454188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.454203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.458299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.458374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.458403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.462792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.462849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.462892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.467255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.467311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:11.072 [2024-07-15 16:32:56.467340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.471713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.471768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.471797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.476124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.072 [2024-07-15 16:32:56.476177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.072 [2024-07-15 16:32:56.476207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.072 [2024-07-15 16:32:56.480595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.480650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.480679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.485034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.485101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.485116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.489267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.489307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.489320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.494113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.494172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.494187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.498831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.498907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.498953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.503235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.503290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.503320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.507641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.507695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.507724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.512013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.512067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.512098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.516579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.516623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.516638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.521215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.521264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.521279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.525983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.526031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.526045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.530792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.530849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.530877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.535381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.535439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.535453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.539774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.539814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.539828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.544201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.544240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.544255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.548653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.548694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.548709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.553190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.553230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.553245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.557653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.557709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.557723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.562122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 
00:18:11.073 [2024-07-15 16:32:56.562178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.562192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.566597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.566647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.566660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.571142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.571183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.571197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.575665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.575710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.575725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.580104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.580162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.580176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.584559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.584600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.584614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.588973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.589012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.589026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.593395] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.593436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.593450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.597779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.597819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.597834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.602154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.602193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.602207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.606458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.606499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.073 [2024-07-15 16:32:56.606513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.073 [2024-07-15 16:32:56.610897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.073 [2024-07-15 16:32:56.610936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.074 [2024-07-15 16:32:56.610950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.074 [2024-07-15 16:32:56.615179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.074 [2024-07-15 16:32:56.615219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.074 [2024-07-15 16:32:56.615233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.074 [2024-07-15 16:32:56.619531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.074 [2024-07-15 16:32:56.619577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.074 [2024-07-15 16:32:56.619591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:11.333 [2024-07-15 16:32:56.623982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.333 [2024-07-15 16:32:56.624025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.333 [2024-07-15 16:32:56.624039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.333 [2024-07-15 16:32:56.628465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.333 [2024-07-15 16:32:56.628514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.333 [2024-07-15 16:32:56.628529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.333 [2024-07-15 16:32:56.632795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.333 [2024-07-15 16:32:56.632836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.333 [2024-07-15 16:32:56.632850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.333 [2024-07-15 16:32:56.637230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.333 [2024-07-15 16:32:56.637290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.637307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.641624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.641664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.641678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.645999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.646044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.646058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.650374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.650414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.650429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.654744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.654784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.654798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.659028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.659072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.659086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.663288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.663328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.663345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.667657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.667698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.667711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.672014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.672054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.672068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.676289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.676328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.676342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.680707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.680748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.680762] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.685130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.685172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.685186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.689397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.689436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.689450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.693672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.693711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.693725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.698035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.698074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.698088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.702306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.702345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.702359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.706749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.706792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.706806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.711264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.711308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.711322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.715719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.715777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.715791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.720075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.720130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.720145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.724372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.724412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.724427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.728645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.728701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.728715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.733148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.733188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.733202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.737785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.737873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.737889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.742452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.742511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.742525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.746828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.746918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.746935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.751417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.751492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.751507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.756296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.756346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.756361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.760862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.760945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.334 [2024-07-15 16:32:56.760959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.334 [2024-07-15 16:32:56.765131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.334 [2024-07-15 16:32:56.765173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.765187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.769318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.769358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.769372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.773773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.773816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.773831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.778431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.778480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.778495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.782969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.783011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.783026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.787415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.787457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.787472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.791825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.791878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.791893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.796232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.796273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.796288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.800797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.800849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.800876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.805472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 
00:18:11.335 [2024-07-15 16:32:56.805543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.805558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.810055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.810111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.810126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.814622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.814663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.814677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.819108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.819165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.819180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.823622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.823678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.823692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.828092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.828149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.828163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.832520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.832600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.832614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.837152] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.837194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.837209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.841709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.841758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.841773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.846187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.846230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.846245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.850722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.850763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.850777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.855117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.855157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.855171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.859557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.859597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.859611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.864006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.864045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.864065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.868401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.868442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.868456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.872780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.872830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.872851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.877119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.877159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.877173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.335 [2024-07-15 16:32:56.881732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.335 [2024-07-15 16:32:56.881778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.335 [2024-07-15 16:32:56.881793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.596 [2024-07-15 16:32:56.886400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.596 [2024-07-15 16:32:56.886449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.596 [2024-07-15 16:32:56.886465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.596 [2024-07-15 16:32:56.890931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.596 [2024-07-15 16:32:56.890979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.596 [2024-07-15 16:32:56.890993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.596 [2024-07-15 16:32:56.895363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.596 [2024-07-15 16:32:56.895408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.596 [2024-07-15 16:32:56.895422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.596 [2024-07-15 16:32:56.899845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.596 [2024-07-15 16:32:56.899898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.596 [2024-07-15 16:32:56.899912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.596 [2024-07-15 16:32:56.904313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.596 [2024-07-15 16:32:56.904354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.596 [2024-07-15 16:32:56.904369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.596 [2024-07-15 16:32:56.908664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.596 [2024-07-15 16:32:56.908717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.596 [2024-07-15 16:32:56.908731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.596 [2024-07-15 16:32:56.913008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.596 [2024-07-15 16:32:56.913056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.596 [2024-07-15 16:32:56.913071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.596 [2024-07-15 16:32:56.917494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.596 [2024-07-15 16:32:56.917534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.917548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.921851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.921902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.921916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.926245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.926285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.926300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.930555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.930595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.930609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.935032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.935071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.935088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.939390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.939429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.939443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.943740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.943779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.943794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.948185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.948225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.948239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.952623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.952664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.952677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.957079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.957119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:11.597 [2024-07-15 16:32:56.957133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.961415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.961455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.961469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.965878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.965917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.965931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.970323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.970363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.970378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.974642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.974681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.974695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.979045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.979086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.979100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.983336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.983376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.983390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.987610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.987649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.987663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.992021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.992060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.992074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:56.996568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:56.996611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:56.996625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.001028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.001081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.001095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.005350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.005393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.005407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.009811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.009876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.009893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.014286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.014338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.014352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.018735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.018778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.018792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.023209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.023252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.023267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.027620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.027661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.027675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.032095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.032138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.032152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.036552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.036608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.036629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.041020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.597 [2024-07-15 16:32:57.041074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.597 [2024-07-15 16:32:57.041089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.597 [2024-07-15 16:32:57.045556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.045596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.045610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.050062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 
00:18:11.598 [2024-07-15 16:32:57.050103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.050116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.054390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.054434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.054448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.058756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.058796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.058810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.063120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.063159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.063173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.067458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.067498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.067511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.071855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.071922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.071937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.076108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.076149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.076162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.080511] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.080554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.080569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.084973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.085016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.085030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.089415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.089457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.089482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.093820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.093875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.093891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.098259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.098299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.098313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.102678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.102732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.102763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.107169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.107224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.107238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.111589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.111642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.111672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.115953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.116008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.116037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.120584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.120627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.120641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.125270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.125314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.125328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.129914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.129984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.130015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.598 [2024-07-15 16:32:57.134337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d6ac0) 00:18:11.598 [2024-07-15 16:32:57.134392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.598 [2024-07-15 16:32:57.134421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.598 00:18:11.598 Latency(us) 00:18:11.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.598 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:11.598 nvme0n1 : 2.00 6913.44 864.18 0.00 0.00 2310.81 1884.16 5600.35 00:18:11.598 =================================================================================================================== 00:18:11.598 Total : 6913.44 
864.18 0.00 0.00 2310.81 1884.16 5600.35 00:18:11.598 0 00:18:11.857 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:11.857 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:11.857 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:11.857 | .driver_specific 00:18:11.857 | .nvme_error 00:18:11.857 | .status_code 00:18:11.857 | .command_transient_transport_error' 00:18:11.857 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:12.116 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 446 > 0 )) 00:18:12.116 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80601 00:18:12.116 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80601 ']' 00:18:12.117 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80601 00:18:12.117 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:12.117 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.117 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80601 00:18:12.117 killing process with pid 80601 00:18:12.117 Received shutdown signal, test time was about 2.000000 seconds 00:18:12.117 00:18:12.117 Latency(us) 00:18:12.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.117 =================================================================================================================== 00:18:12.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.117 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:12.117 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:12.117 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80601' 00:18:12.117 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80601 00:18:12.117 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80601 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80656 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80656 /var/tmp/bperf.sock 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@829 -- # '[' -z 80656 ']' 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:12.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.375 16:32:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:12.375 [2024-07-15 16:32:57.791009] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:18:12.375 [2024-07-15 16:32:57.791134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80656 ] 00:18:12.634 [2024-07-15 16:32:57.934613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.634 [2024-07-15 16:32:58.051349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.634 [2024-07-15 16:32:58.108251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:13.669 16:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.669 16:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:13.669 16:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:13.669 16:32:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:13.669 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:13.669 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.669 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.669 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.669 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.669 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.932 nvme0n1 00:18:13.932 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:13.932 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.932 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.932 16:32:59 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.932 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:13.932 16:32:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:14.193 Running I/O for 2 seconds... 00:18:14.193 [2024-07-15 16:32:59.572341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fef90 00:18:14.193 [2024-07-15 16:32:59.574931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.574984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.588381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190feb58 00:18:14.193 [2024-07-15 16:32:59.590888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.590930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.604142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fe2e8 00:18:14.193 [2024-07-15 16:32:59.606608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.606647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.619977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fda78 00:18:14.193 [2024-07-15 16:32:59.622433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.622474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.635883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fd208 00:18:14.193 [2024-07-15 16:32:59.638361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.638407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.651829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fc998 00:18:14.193 [2024-07-15 16:32:59.654247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.654291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.667604] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fc128 00:18:14.193 [2024-07-15 16:32:59.669997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.670034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.683337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fb8b8 00:18:14.193 [2024-07-15 16:32:59.685699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.685735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.699289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fb048 00:18:14.193 [2024-07-15 16:32:59.701682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.701730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.715523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fa7d8 00:18:14.193 [2024-07-15 16:32:59.717960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.718005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:14.193 [2024-07-15 16:32:59.731814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f9f68 00:18:14.193 [2024-07-15 16:32:59.734189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.193 [2024-07-15 16:32:59.734232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.748191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f96f8 00:18:14.453 [2024-07-15 16:32:59.750489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.750533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.763970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f8e88 00:18:14.453 [2024-07-15 16:32:59.766229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.766269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:14.453 
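Note on the randread summary printed above: the bandwidth column is just IOPS scaled by the 128 KiB I/O size, i.e. 6913.44 x 131072 / 2^20 = 864.18 MiB/s, and the pass criterion is only that the transient-transport-error counter read back over the bperf RPC socket is non-zero (446 in this run). A minimal standalone sketch of that check, assuming the rpc.py path, socket, and bdev name shown in this log, is:

# Sketch of the get_transient_errcount step traced above; assumes bdevperf is still
# serving RPCs on /var/tmp/bperf.sock and was started after
# bdev_nvme_set_options --nvme-error-stat, so the per-status error counters exist.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
             bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) || echo 'expected injected data-digest errors to surface as transient transport errors' >&2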
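The randwrite pass whose per-command digest failures fill the surrounding output is driven by the handful of RPCs traced just above (16:32:57 onward): start bdevperf in wait-for-RPC mode on its own socket, enable NVMe error statistics with unlimited bdev retries, attach the target with data digest enabled, re-arm the accel CRC32C corruption, and run the 2-second workload. A condensed, hedged shell sketch of that sequence, with paths, address, and subsystem NQN exactly as printed in this log (not a drop-in replacement for digest.sh):

# Start bdevperf idle (-z) on its private RPC socket: 4 KiB random writes, QD 128, 2 s.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock
$SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z &

# Keep per-status NVMe error counters and retry failed I/O indefinitely.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# CRC32C corruption is switched off while connecting (issued via the harness's rpc_cmd
# helper in the trace, not on the bperf socket), then the controller is attached with
# data digest (--ddgst) and every 256th CRC32C is corrupted so digests mismatch.
rpc_cmd accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the queued workload; each digest mismatch completes as
# COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what gets counted afterwards.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests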
[2024-07-15 16:32:59.779662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f8618 00:18:14.453 [2024-07-15 16:32:59.781912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.781950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.795441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f7da8 00:18:14.453 [2024-07-15 16:32:59.797671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.797710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.811236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f7538 00:18:14.453 [2024-07-15 16:32:59.813463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.813503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.827331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f6cc8 00:18:14.453 [2024-07-15 16:32:59.829568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.829611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.843444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f6458 00:18:14.453 [2024-07-15 16:32:59.845637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.845683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.859236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f5be8 00:18:14.453 [2024-07-15 16:32:59.861411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.861448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.875020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f5378 00:18:14.453 [2024-07-15 16:32:59.877158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.877198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:18:14.453 [2024-07-15 16:32:59.890874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f4b08 00:18:14.453 [2024-07-15 16:32:59.892990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.893031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.906657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f4298 00:18:14.453 [2024-07-15 16:32:59.908748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.908785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.922445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f3a28 00:18:14.453 [2024-07-15 16:32:59.924511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.924548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.938128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f31b8 00:18:14.453 [2024-07-15 16:32:59.940172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.940208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.953814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f2948 00:18:14.453 [2024-07-15 16:32:59.955834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.955882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.969602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f20d8 00:18:14.453 [2024-07-15 16:32:59.971615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.971653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:32:59.985561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f1868 00:18:14.453 [2024-07-15 16:32:59.987609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.453 [2024-07-15 16:32:59.987651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 
sqhd:004d p:0 m:0 dnr:0 00:18:14.453 [2024-07-15 16:33:00.001759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f0ff8 00:18:14.712 [2024-07-15 16:33:00.003766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.003810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.017794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f0788 00:18:14.712 [2024-07-15 16:33:00.019744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.019784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.033555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190eff18 00:18:14.712 [2024-07-15 16:33:00.035498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.035540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.049430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ef6a8 00:18:14.712 [2024-07-15 16:33:00.051354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.051393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.065416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190eee38 00:18:14.712 [2024-07-15 16:33:00.067308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.067347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.081292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ee5c8 00:18:14.712 [2024-07-15 16:33:00.083216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.083259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.097329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190edd58 00:18:14.712 [2024-07-15 16:33:00.099237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.099280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.113466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ed4e8 00:18:14.712 [2024-07-15 16:33:00.115349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.115390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.129270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ecc78 00:18:14.712 [2024-07-15 16:33:00.131094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.131131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.145111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ec408 00:18:14.712 [2024-07-15 16:33:00.146903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.146949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.160795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ebb98 00:18:14.712 [2024-07-15 16:33:00.162611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.712 [2024-07-15 16:33:00.162649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:14.712 [2024-07-15 16:33:00.176578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190eb328 00:18:14.713 [2024-07-15 16:33:00.178357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.713 [2024-07-15 16:33:00.178395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:14.713 [2024-07-15 16:33:00.192308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190eaab8 00:18:14.713 [2024-07-15 16:33:00.194062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.713 [2024-07-15 16:33:00.194099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:14.713 [2024-07-15 16:33:00.208194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ea248 00:18:14.713 [2024-07-15 16:33:00.209990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.713 [2024-07-15 16:33:00.210034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:14.713 [2024-07-15 16:33:00.224455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e99d8 00:18:14.713 [2024-07-15 16:33:00.226218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.713 [2024-07-15 16:33:00.226260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:14.713 [2024-07-15 16:33:00.240325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e9168 00:18:14.713 [2024-07-15 16:33:00.242010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.713 [2024-07-15 16:33:00.242047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:14.713 [2024-07-15 16:33:00.256030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e88f8 00:18:14.713 [2024-07-15 16:33:00.257690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.713 [2024-07-15 16:33:00.257725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:14.972 [2024-07-15 16:33:00.272013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e8088 00:18:14.972 [2024-07-15 16:33:00.273662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.972 [2024-07-15 16:33:00.273701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:14.972 [2024-07-15 16:33:00.287775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e7818 00:18:14.972 [2024-07-15 16:33:00.289420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.972 [2024-07-15 16:33:00.289459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:14.972 [2024-07-15 16:33:00.303688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e6fa8 00:18:14.972 [2024-07-15 16:33:00.305311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.972 [2024-07-15 16:33:00.305352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:14.972 [2024-07-15 16:33:00.319557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e6738 00:18:14.972 [2024-07-15 16:33:00.321153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.972 [2024-07-15 16:33:00.321194] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:14.972 [2024-07-15 16:33:00.335330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e5ec8 00:18:14.972 [2024-07-15 16:33:00.336895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.972 [2024-07-15 16:33:00.336933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.972 [2024-07-15 16:33:00.351041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e5658 00:18:14.972 [2024-07-15 16:33:00.352572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.972 [2024-07-15 16:33:00.352610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:14.972 [2024-07-15 16:33:00.366751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e4de8 00:18:14.972 [2024-07-15 16:33:00.368282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.972 [2024-07-15 16:33:00.368317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:14.972 [2024-07-15 16:33:00.382444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e4578 00:18:14.972 [2024-07-15 16:33:00.383994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.972 [2024-07-15 16:33:00.384032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:14.972 [2024-07-15 16:33:00.398351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e3d08 00:18:14.973 [2024-07-15 16:33:00.399826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.973 [2024-07-15 16:33:00.399874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:14.973 [2024-07-15 16:33:00.414072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e3498 00:18:14.973 [2024-07-15 16:33:00.415536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.973 [2024-07-15 16:33:00.415576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:14.973 [2024-07-15 16:33:00.430229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e2c28 00:18:14.973 [2024-07-15 16:33:00.431736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.973 [2024-07-15 16:33:00.431785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:14.973 [2024-07-15 16:33:00.446254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e23b8 00:18:14.973 [2024-07-15 16:33:00.447679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.973 [2024-07-15 16:33:00.447717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:14.973 [2024-07-15 16:33:00.461966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e1b48 00:18:14.973 [2024-07-15 16:33:00.463369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.973 [2024-07-15 16:33:00.463406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:14.973 [2024-07-15 16:33:00.477668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e12d8 00:18:14.973 [2024-07-15 16:33:00.479061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.973 [2024-07-15 16:33:00.479098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:14.973 [2024-07-15 16:33:00.493368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e0a68 00:18:14.973 [2024-07-15 16:33:00.494732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.973 [2024-07-15 16:33:00.494770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:14.973 [2024-07-15 16:33:00.509318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e01f8 00:18:14.973 [2024-07-15 16:33:00.510698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.973 [2024-07-15 16:33:00.510740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.525284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190df988 00:18:15.232 [2024-07-15 16:33:00.526648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.526696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.541069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190df118 00:18:15.232 [2024-07-15 16:33:00.542375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 
[2024-07-15 16:33:00.542413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.556764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190de8a8 00:18:15.232 [2024-07-15 16:33:00.558063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.558100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.572473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190de038 00:18:15.232 [2024-07-15 16:33:00.573749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.573786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.594828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190de038 00:18:15.232 [2024-07-15 16:33:00.597347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.597385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.610599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190de8a8 00:18:15.232 [2024-07-15 16:33:00.613067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.613103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.626411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190df118 00:18:15.232 [2024-07-15 16:33:00.628881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.628920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.642658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190df988 00:18:15.232 [2024-07-15 16:33:00.645173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.645221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.658631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e01f8 00:18:15.232 [2024-07-15 16:33:00.661043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12186 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.661091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.674356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e0a68 00:18:15.232 [2024-07-15 16:33:00.676752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.676791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.690509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e12d8 00:18:15.232 [2024-07-15 16:33:00.692936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.693001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.706630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e1b48 00:18:15.232 [2024-07-15 16:33:00.709027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.709079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.722514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e23b8 00:18:15.232 [2024-07-15 16:33:00.724831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.724877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.738309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e2c28 00:18:15.232 [2024-07-15 16:33:00.740676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.740720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.754556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e3498 00:18:15.232 [2024-07-15 16:33:00.756923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.756967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.232 [2024-07-15 16:33:00.770477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e3d08 00:18:15.232 [2024-07-15 16:33:00.772762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:22482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.232 [2024-07-15 16:33:00.772800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.491 [2024-07-15 16:33:00.786588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e4578 00:18:15.492 [2024-07-15 16:33:00.788866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.788908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.802347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e4de8 00:18:15.492 [2024-07-15 16:33:00.804579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.804619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.818438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e5658 00:18:15.492 [2024-07-15 16:33:00.820714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.820758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.834540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e5ec8 00:18:15.492 [2024-07-15 16:33:00.836762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.836802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.850340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e6738 00:18:15.492 [2024-07-15 16:33:00.852520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.852559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.866638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e6fa8 00:18:15.492 [2024-07-15 16:33:00.868878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.868924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.883062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e7818 00:18:15.492 [2024-07-15 16:33:00.885283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:6936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.885329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.899405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e8088 00:18:15.492 [2024-07-15 16:33:00.901584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.901626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.915393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e88f8 00:18:15.492 [2024-07-15 16:33:00.917522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.917562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.931213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e9168 00:18:15.492 [2024-07-15 16:33:00.933324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.933363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.946983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190e99d8 00:18:15.492 [2024-07-15 16:33:00.949071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.949108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.962745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ea248 00:18:15.492 [2024-07-15 16:33:00.964806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.964841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.978617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190eaab8 00:18:15.492 [2024-07-15 16:33:00.980683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.980720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:00.994634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190eb328 00:18:15.492 [2024-07-15 16:33:00.996653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:00.996701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:01.010947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ebb98 00:18:15.492 [2024-07-15 16:33:01.012995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:01.013040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.492 [2024-07-15 16:33:01.026865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ec408 00:18:15.492 [2024-07-15 16:33:01.028819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.492 [2024-07-15 16:33:01.028873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.751 [2024-07-15 16:33:01.042739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ecc78 00:18:15.751 [2024-07-15 16:33:01.044709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.751 [2024-07-15 16:33:01.044759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.751 [2024-07-15 16:33:01.058657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ed4e8 00:18:15.751 [2024-07-15 16:33:01.060617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.751 [2024-07-15 16:33:01.060658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.751 [2024-07-15 16:33:01.075120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190edd58 00:18:15.751 [2024-07-15 16:33:01.077081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.751 [2024-07-15 16:33:01.077127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.751 [2024-07-15 16:33:01.091217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ee5c8 00:18:15.751 [2024-07-15 16:33:01.093148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.751 [2024-07-15 16:33:01.093191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.751 [2024-07-15 16:33:01.107127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190eee38 00:18:15.751 [2024-07-15 16:33:01.109021] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.751 [2024-07-15 16:33:01.109069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.751 [2024-07-15 16:33:01.123384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190ef6a8 00:18:15.751 [2024-07-15 16:33:01.125314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.751 [2024-07-15 16:33:01.125363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.751 [2024-07-15 16:33:01.139320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190eff18 00:18:15.751 [2024-07-15 16:33:01.141157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.751 [2024-07-15 16:33:01.141196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.751 [2024-07-15 16:33:01.155118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f0788 00:18:15.752 [2024-07-15 16:33:01.156921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.156960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.752 [2024-07-15 16:33:01.170940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f0ff8 00:18:15.752 [2024-07-15 16:33:01.172715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.172754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.752 [2024-07-15 16:33:01.186520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f1868 00:18:15.752 [2024-07-15 16:33:01.188310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.188362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.752 [2024-07-15 16:33:01.201661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f20d8 00:18:15.752 [2024-07-15 16:33:01.203477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.203525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.752 [2024-07-15 16:33:01.216650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f2948 00:18:15.752 [2024-07-15 
16:33:01.218525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.218586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.752 [2024-07-15 16:33:01.232278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f31b8 00:18:15.752 [2024-07-15 16:33:01.234106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.234167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.752 [2024-07-15 16:33:01.247682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f3a28 00:18:15.752 [2024-07-15 16:33:01.249508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.249546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.752 [2024-07-15 16:33:01.262646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f4298 00:18:15.752 [2024-07-15 16:33:01.264387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.264435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.752 [2024-07-15 16:33:01.277531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f4b08 00:18:15.752 [2024-07-15 16:33:01.279162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.279211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.752 [2024-07-15 16:33:01.292513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f5378 00:18:15.752 [2024-07-15 16:33:01.294207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.752 [2024-07-15 16:33:01.294256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.308958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f5be8 00:18:16.011 [2024-07-15 16:33:01.310685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.310744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.325411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f6458 
00:18:16.011 [2024-07-15 16:33:01.327047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.327093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.341952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f6cc8 00:18:16.011 [2024-07-15 16:33:01.343660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.343703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.358443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f7538 00:18:16.011 [2024-07-15 16:33:01.360054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.360095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.374747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f7da8 00:18:16.011 [2024-07-15 16:33:01.376383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.376432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.391054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f8618 00:18:16.011 [2024-07-15 16:33:01.392586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.392627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.406994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f8e88 00:18:16.011 [2024-07-15 16:33:01.408508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.408548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.423723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190f96f8 00:18:16.011 [2024-07-15 16:33:01.425303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.425351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.440298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with 
pdu=0x2000190f9f68 00:18:16.011 [2024-07-15 16:33:01.441786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.441828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.456050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fa7d8 00:18:16.011 [2024-07-15 16:33:01.457563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.457606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.473216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fb048 00:18:16.011 [2024-07-15 16:33:01.474696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.474754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.489258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fb8b8 00:18:16.011 [2024-07-15 16:33:01.490667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.490720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.504909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fc128 00:18:16.011 [2024-07-15 16:33:01.506301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.506353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.520526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fc998 00:18:16.011 [2024-07-15 16:33:01.521914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.521951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:16.011 [2024-07-15 16:33:01.536236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278360) with pdu=0x2000190fd208 00:18:16.011 [2024-07-15 16:33:01.537593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.011 [2024-07-15 16:33:01.537637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:16.011 00:18:16.011 Latency(us) 00:18:16.011 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:18:16.011 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.011 nvme0n1 : 2.00 15856.80 61.94 0.00 0.00 8065.57 7089.80 30742.34 00:18:16.011 =================================================================================================================== 00:18:16.011 Total : 15856.80 61.94 0.00 0.00 8065.57 7089.80 30742.34 00:18:16.011 0 00:18:16.270 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:16.270 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:16.270 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:16.270 | .driver_specific 00:18:16.270 | .nvme_error 00:18:16.270 | .status_code 00:18:16.270 | .command_transient_transport_error' 00:18:16.270 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:16.542 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 124 > 0 )) 00:18:16.542 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80656 00:18:16.542 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80656 ']' 00:18:16.542 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80656 00:18:16.542 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:16.542 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.542 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80656 00:18:16.542 killing process with pid 80656 00:18:16.542 Received shutdown signal, test time was about 2.000000 seconds 00:18:16.542 00:18:16.542 Latency(us) 00:18:16.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.542 =================================================================================================================== 00:18:16.542 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.542 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:16.542 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:16.543 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80656' 00:18:16.543 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80656 00:18:16.543 16:33:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80656 00:18:16.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
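The trace above shows host/digest.sh counting transient transport errors through the bdevperf RPC socket before tearing the process down. As a rough, non-authoritative sketch of what that traced helper appears to do (the rpc.py path, socket path, and jq filter are copied from the trace; the surrounding function body is an assumption):

#!/usr/bin/env bash
# Sketch only: reconstructs the error-count query visible in the trace above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as shown in the trace
bperf_sock=/var/tmp/bperf.sock                       # bdevperf RPC socket from the trace

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat returns per-bdev statistics; the NVMe error counters sit under
    # .driver_specific.nvme_error.status_code, keyed by status name.
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

# The test then asserts that at least one COMMAND TRANSIENT TRANSPORT ERROR was
# observed, matching the "(( 124 > 0 ))" check in the trace above.
(( $(get_transient_errcount nvme0n1) > 0 ))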
00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80722 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80722 /var/tmp/bperf.sock 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80722 ']' 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.809 16:33:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.809 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:16.809 Zero copy mechanism will not be used. 00:18:16.809 [2024-07-15 16:33:02.166933] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
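This second pass (run_bperf_err randwrite 131072 16) drives 128 KiB random writes at queue depth 16 against the same socket. A minimal sketch of the launch-and-wait pattern traced above, assuming the binary and socket paths shown in the log; the readiness poll via rpc_get_methods is an illustrative stand-in for the waitforlisten helper, not its actual body:

# Sketch: start bdevperf with the traced arguments and hold on to its pid.
bperf_sock=/var/tmp/bperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Wait until the bdevperf RPC server is listening on the UNIX domain socket.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperf_sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done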
00:18:16.809 [2024-07-15 16:33:02.167035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80722 ] 00:18:16.810 [2024-07-15 16:33:02.303131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.068 [2024-07-15 16:33:02.420905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.068 [2024-07-15 16:33:02.475388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:17.635 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.635 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:17.635 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:17.635 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:17.893 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:17.893 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.893 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.893 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.893 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.893 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:18.459 nvme0n1 00:18:18.459 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:18.459 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.459 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:18.459 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.459 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:18.459 16:33:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:18.459 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:18.459 Zero copy mechanism will not be used. 00:18:18.459 Running I/O for 2 seconds... 
00:18:18.459 [2024-07-15 16:33:03.987060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.459 [2024-07-15 16:33:03.987385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.459 [2024-07-15 16:33:03.987429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.459 [2024-07-15 16:33:03.992255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.459 [2024-07-15 16:33:03.992566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.459 [2024-07-15 16:33:03.992607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.459 [2024-07-15 16:33:03.997387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.459 [2024-07-15 16:33:03.997702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.459 [2024-07-15 16:33:03.997741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.459 [2024-07-15 16:33:04.002614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.459 [2024-07-15 16:33:04.002922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.459 [2024-07-15 16:33:04.002961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.459 [2024-07-15 16:33:04.007752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.459 [2024-07-15 16:33:04.008062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.459 [2024-07-15 16:33:04.008101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.717 [2024-07-15 16:33:04.013018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.717 [2024-07-15 16:33:04.013333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.717 [2024-07-15 16:33:04.013371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.717 [2024-07-15 16:33:04.018259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.717 [2024-07-15 16:33:04.018572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.717 [2024-07-15 16:33:04.018611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.717 [2024-07-15 16:33:04.023404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.023704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.023743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.028613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.028934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.028974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.034056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.034400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.034439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.039492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.039792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.039831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.044742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.045077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.045116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.049882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.050180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.050219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.055023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.055329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.055367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.060187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.060492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.060530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.065579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.065889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.065927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.070692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.071042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.071079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.076022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.076315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.076356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.081324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.081622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.081662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.086484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.086779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.086818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.091672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.091981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.092019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.096755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.097073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.097106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.101993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.102298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.102342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.107176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.107486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.107520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.112359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.112665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.112704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.117655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.117967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.118000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.122844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.123160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.123195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.128071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.128378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 
[2024-07-15 16:33:04.128413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.133243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.133549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.133588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.138366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.138668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.138706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.143631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.143950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.144001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.148829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.149174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.149210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.154027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.154331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.154364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.159125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.159420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.159463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.164221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.164517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.164553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.169386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.169692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.169726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.174575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.174897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.174930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.179825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.180136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.180181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.185084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.185402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.185440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.190391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.190697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.190733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.195617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.195946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.196000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.200839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.201171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.201207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.206060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.206359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.206392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.211200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.211495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.211528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.216374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.216674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.216709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.221451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.221754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.221790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.226736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.227043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.227084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.231859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.232184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.232224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.237042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.237345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.237381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.242203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.242524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.242558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.247426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.247738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.247772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.252573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.252903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.252936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.257611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.257924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.257957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.718 [2024-07-15 16:33:04.262831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.718 [2024-07-15 16:33:04.263187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.718 [2024-07-15 16:33:04.263225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.978 [2024-07-15 16:33:04.268154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.978 [2024-07-15 16:33:04.268452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.978 [2024-07-15 16:33:04.268490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.978 [2024-07-15 16:33:04.273397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.978 
[2024-07-15 16:33:04.273696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.978 [2024-07-15 16:33:04.273730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.978 [2024-07-15 16:33:04.278629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.978 [2024-07-15 16:33:04.278961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.978 [2024-07-15 16:33:04.278993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.978 [2024-07-15 16:33:04.283853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.978 [2024-07-15 16:33:04.284167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.978 [2024-07-15 16:33:04.284201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.978 [2024-07-15 16:33:04.289073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.978 [2024-07-15 16:33:04.289381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.978 [2024-07-15 16:33:04.289423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.978 [2024-07-15 16:33:04.294144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.978 [2024-07-15 16:33:04.294441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.978 [2024-07-15 16:33:04.294475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.978 [2024-07-15 16:33:04.299227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.978 [2024-07-15 16:33:04.299520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.978 [2024-07-15 16:33:04.299556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.978 [2024-07-15 16:33:04.304478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.304771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.304804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.309750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) 
with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.310074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.310107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.314910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.315215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.315257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.320008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.320314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.320353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.325176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.325491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.325525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.330382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.330683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.330717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.335795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.336137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.336176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.341258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.341554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.341587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.346423] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.346720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.346753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.351667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.351973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.352005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.356804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.357125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.357161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.361934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.362237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.362270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.367115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.367412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.367446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.372299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.372611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.372645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.377504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.377800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.377836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.382682] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.382989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.383026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.387981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.388286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.388320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.393251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.393557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.393592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.398465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.398773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.398816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.403666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.403974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.404007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.408841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.409164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.409200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.414014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.414330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.414364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:18:18.979 [2024-07-15 16:33:04.419175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.419477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.419520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.424302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.424599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.424636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.429495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.429794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.429827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.434598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.434905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.434939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.439757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.440072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.440105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.444829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.445155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.445189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.449924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.979 [2024-07-15 16:33:04.450231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.979 [2024-07-15 16:33:04.450269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.979 [2024-07-15 16:33:04.455078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.455382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.455415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.460226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.460521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.460555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.465342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.465644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.465679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.470425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.470726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.470770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.475667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.475981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.476014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.480981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.481283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.481316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.486043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.486347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.486379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.491227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.491540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.491564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.496425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.496718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.496764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.501667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.501983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.502015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.506966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.507263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.507296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.512205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.512500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.512533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.517492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.517788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.517823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.522788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.523111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.523145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:18.980 [2024-07-15 16:33:04.528001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:18.980 [2024-07-15 16:33:04.528304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.980 [2024-07-15 16:33:04.528339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.533208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.533510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.533544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.538278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.538575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.538608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.543422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.543721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.543759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.548591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.548897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.548930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.553731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.554045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.554078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.558850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.559163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 
[2024-07-15 16:33:04.559199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.564133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.564459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.564492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.569293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.569591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.569631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.574464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.574763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.574805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.579719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.580033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.580066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.584914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.585233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.585266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.590116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.590424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.590457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.595222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.595516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.595551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.600354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.600651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.600686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.605588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.605894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.605927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.610750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.611080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.611114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.615917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.616227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.616260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.621212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.621507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.621541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.626376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.626683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.626719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.631504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.631803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.631836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.636675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.256 [2024-07-15 16:33:04.636993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.256 [2024-07-15 16:33:04.637026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.256 [2024-07-15 16:33:04.641760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.642074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.642107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.646929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.647228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.647265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.652038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.652333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.652369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.657368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.657667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.657706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.662948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.663249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.663287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.668120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.668419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.668453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.673263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.673568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.673602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.678378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.678672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.678706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.683581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.683892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.683928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.689003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.689333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.689380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.694277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.694578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.694614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.699476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.699776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.699809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.704630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 
[2024-07-15 16:33:04.704940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.704973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.709763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.710072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.710109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.714838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.715149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.715182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.719916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.720211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.720246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.725123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.725418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.725456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.730253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.730551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.730588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.735368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.735663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.735699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.740498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.740791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.740825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.745623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.745928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.745960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.750764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.751082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.751119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.756362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.756661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.756696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.761502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.761802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.761835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.766644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.766950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.766987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.771821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.772126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.772159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.777062] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.777359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.777414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.782243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.782562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.782595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.787420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.257 [2024-07-15 16:33:04.787720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.257 [2024-07-15 16:33:04.787752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.257 [2024-07-15 16:33:04.792627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.258 [2024-07-15 16:33:04.792937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.258 [2024-07-15 16:33:04.792971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.258 [2024-07-15 16:33:04.797850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.258 [2024-07-15 16:33:04.798163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.258 [2024-07-15 16:33:04.798197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.258 [2024-07-15 16:33:04.803156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.258 [2024-07-15 16:33:04.803473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.258 [2024-07-15 16:33:04.803523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.808576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.808900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.808933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:18:19.520 [2024-07-15 16:33:04.813835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.814147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.814184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.819055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.819357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.819395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.824242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.824545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.824582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.829513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.829812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.829848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.834765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.835095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.835128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.840034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.840328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.840360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.845140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.845446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.845485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.850363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.850680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.850723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.855652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.855983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.856015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.860853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.861174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.861214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.866033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.866333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.866366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.871225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.871520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.871555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.876460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.876766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.876801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.881689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.882002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.882035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.886857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.887171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.887205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.892133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.892447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.892480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.897450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.897765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.897814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.902803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.903109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.903147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.908033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.908344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.908390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.913273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.913579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.913615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.918445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.918758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.918792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.923677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.923999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.924041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.928783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.929111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.929146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.933982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.934300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.934355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.939259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.939578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.939612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.944434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.944734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.944768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.949649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.949974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.950017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.954944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.955252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 
[2024-07-15 16:33:04.955310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.960255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.960571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.960607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.965514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.965814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.965851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.970696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.971010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.971052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.975923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.976220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.976257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.981175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.981486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.981530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.986460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.986771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.986814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.991800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.992139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.992173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:04.996982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:04.997301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:04.997336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:05.002183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:05.002490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:05.002534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:05.007305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:05.007615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.520 [2024-07-15 16:33:05.007658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.520 [2024-07-15 16:33:05.012576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.520 [2024-07-15 16:33:05.012898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.012940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.017943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.018266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.018328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.023228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.023546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.023582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.028388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.028684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.028720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.033579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.033925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.033958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.038779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.039096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.039139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.044037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.044336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.044379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.049312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.049619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.049657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.054585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.054891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.054926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.059694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.060009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.060047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.521 [2024-07-15 16:33:05.064905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.521 [2024-07-15 16:33:05.065235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.521 [2024-07-15 16:33:05.065267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.070255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.070551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.070589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.075488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.075784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.075828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.080755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.081097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.081134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.086067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.086374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.086412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.091159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.091455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.091494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.096325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.096632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.096670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.101459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 
[2024-07-15 16:33:05.101767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.101806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.106611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.106932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.106977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.111778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.112099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.112137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.116994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.117315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.117348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.122222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.122515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.122554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.127312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.127623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.127662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.132451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.132754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.132792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.137568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.137877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.137915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.142783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.143091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.143126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.788 [2024-07-15 16:33:05.147996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.788 [2024-07-15 16:33:05.148316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.788 [2024-07-15 16:33:05.148351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.153334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.153636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.153676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.158740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.159061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.159098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.164068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.164366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.164399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.169285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.169589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.169627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.174400] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.174704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.174742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.179493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.179789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.179827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.184626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.184940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.184975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.189753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.190067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.190103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.194914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.195207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.195243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.200034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.200326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.200359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.205187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.205499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.205547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:18:19.789 [2024-07-15 16:33:05.210310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.210614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.210651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.215544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.215840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.215889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.220661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.220971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.221008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.226383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.226684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.226725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.231639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.231948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.231987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.236935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.237267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.237316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.242156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.242457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.242500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.247313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.247635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.247690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.252657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.252970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.253026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.257898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.258223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.258261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.263140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.263437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.263484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.268339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.268640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.268680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.273344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.273429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.273455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.278509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.278613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.278637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.283696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.283775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.283800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.288891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.288971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.288996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.294143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.294215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.294239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.789 [2024-07-15 16:33:05.299292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.789 [2024-07-15 16:33:05.299363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.789 [2024-07-15 16:33:05.299386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.790 [2024-07-15 16:33:05.304468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.790 [2024-07-15 16:33:05.304539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.790 [2024-07-15 16:33:05.304562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.790 [2024-07-15 16:33:05.309624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.790 [2024-07-15 16:33:05.309693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.790 [2024-07-15 16:33:05.309717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.790 [2024-07-15 16:33:05.314807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.790 [2024-07-15 16:33:05.314889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.790 [2024-07-15 16:33:05.314912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:19.790 [2024-07-15 16:33:05.319927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.790 [2024-07-15 16:33:05.320005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.790 [2024-07-15 16:33:05.320028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.790 [2024-07-15 16:33:05.325112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.790 [2024-07-15 16:33:05.325188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.790 [2024-07-15 16:33:05.325215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:19.790 [2024-07-15 16:33:05.330181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.790 [2024-07-15 16:33:05.330265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.790 [2024-07-15 16:33:05.330288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:19.790 [2024-07-15 16:33:05.335366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:19.790 [2024-07-15 16:33:05.335455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.790 [2024-07-15 16:33:05.335478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.061 [2024-07-15 16:33:05.340721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.061 [2024-07-15 16:33:05.340792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.061 [2024-07-15 16:33:05.340816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.061 [2024-07-15 16:33:05.346219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.061 [2024-07-15 16:33:05.346303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.061 [2024-07-15 16:33:05.346326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.061 [2024-07-15 16:33:05.351380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.061 [2024-07-15 16:33:05.351449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.061 
[2024-07-15 16:33:05.351472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.061 [2024-07-15 16:33:05.356532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.061 [2024-07-15 16:33:05.356615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.061 [2024-07-15 16:33:05.356640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.061 [2024-07-15 16:33:05.361882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.362016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.362044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.367128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.367232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.367259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.372500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.372585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.372611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.377689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.377772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.377795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.382819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.382927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.382950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.387994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.388077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.388101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.393214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.393289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.393312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.398451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.398533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.398556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.403742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.403820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.403843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.409101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.409178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.409202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.414221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.414294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.414317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.419356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.419434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.419459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.424451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.424524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.424549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.429647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.429723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.429748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.434760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.434831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.434869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.440037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.440107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.440129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.445241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.445309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.445331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.450364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.450465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.450489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.455441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.455539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.455562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.460482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.460563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.460587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.465541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.465621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.465652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.470667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.470739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.470763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.475841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.475927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.475972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.480972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.481072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.481098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.486118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.486218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.486241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.491175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.491283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.491307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.496348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 
[2024-07-15 16:33:05.496426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.496456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.501467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.501540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.501564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.062 [2024-07-15 16:33:05.506699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.062 [2024-07-15 16:33:05.506793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.062 [2024-07-15 16:33:05.506821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.512126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.512226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.512255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.517402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.517501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.517530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.522603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.522697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.522724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.527787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.527879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.527902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.533008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.533093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.533116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.538144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.538215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.538238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.543324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.543400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.543424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.548508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.548608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.548632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.553679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.553751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.553775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.558834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.558948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.558978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.564046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.564114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.564136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.569115] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.569189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.569212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.574263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.574343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.574366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.579358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.579455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.579479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.584592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.584701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.584732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.589887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.590014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.590042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.595146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.595265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.595294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.600381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.600470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.600497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:18:20.063 [2024-07-15 16:33:05.605540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.605608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.605631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.063 [2024-07-15 16:33:05.610580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.063 [2024-07-15 16:33:05.610660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.063 [2024-07-15 16:33:05.610682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.615662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.615734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.615756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.620745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.620817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.620840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.625960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.626036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.626059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.630958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.631046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.631069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.636135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.636208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.636230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.641203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.641275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.641298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.646353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.646430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.646453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.651502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.651570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.651593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.656517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.656603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.656625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.661686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.661771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.661793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.666733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.666821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.666843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.671766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.671848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.671884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.676772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.676856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.676879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.681881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.681950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.681973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.686909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.686991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.687015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.692024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.692096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.692119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.697045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.697153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.697178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.702179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.702288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.702324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.707537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.707617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.707642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.712577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.712650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.712673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.717547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.717620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.717645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.722790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.722888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.722926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.727957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.728028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.728051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.733253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.733330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.733363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.738498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.738597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.738621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.743479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.743582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 
[2024-07-15 16:33:05.743605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.748525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.748629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.748652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.753462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.753533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.323 [2024-07-15 16:33:05.753556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.323 [2024-07-15 16:33:05.758429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.323 [2024-07-15 16:33:05.758512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.758534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.763518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.763587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.763611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.768654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.768724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.768747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.773908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.773992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.774015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.778932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.779030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.779053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.784006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.784111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.784135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.789039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.789142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.789171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.794316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.794418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.794445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.799351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.799461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.799487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.804653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.804739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.804766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.809725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.809824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.809847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.815022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.815111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.815134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.820086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.820185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.820208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.825169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.825237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.825260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.830132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.830234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.830257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.835135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.835234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.835257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.840026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.840126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.840148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.845165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.845231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.845254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.850329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.850398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.850421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.855545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.855613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.855636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.860674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.860745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.860768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.865792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.865879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.865902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.324 [2024-07-15 16:33:05.870883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.324 [2024-07-15 16:33:05.870956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.324 [2024-07-15 16:33:05.870978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.875946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.876041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.876064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.881069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.881141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.881163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.886168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 
[2024-07-15 16:33:05.886234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.886257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.891200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.891269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.891292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.896327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.896396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.896421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.901462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.901533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.901556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.906596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.906683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.906706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.911683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.911766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.911789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.916791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.916891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.916929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.921923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.922024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.922047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.927041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.927111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.927135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.583 [2024-07-15 16:33:05.932138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.583 [2024-07-15 16:33:05.932212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.583 [2024-07-15 16:33:05.932235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.584 [2024-07-15 16:33:05.937215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.584 [2024-07-15 16:33:05.937283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.584 [2024-07-15 16:33:05.937306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.584 [2024-07-15 16:33:05.942327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.584 [2024-07-15 16:33:05.942396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.584 [2024-07-15 16:33:05.942419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.584 [2024-07-15 16:33:05.947408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.584 [2024-07-15 16:33:05.947479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.584 [2024-07-15 16:33:05.947501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.584 [2024-07-15 16:33:05.952503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.584 [2024-07-15 16:33:05.952573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.584 [2024-07-15 16:33:05.952595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.584 [2024-07-15 16:33:05.957723] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.584 [2024-07-15 16:33:05.957808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.584 [2024-07-15 16:33:05.957831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.584 [2024-07-15 16:33:05.962832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.584 [2024-07-15 16:33:05.962917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.584 [2024-07-15 16:33:05.962949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:20.584 [2024-07-15 16:33:05.967874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.584 [2024-07-15 16:33:05.967970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.584 [2024-07-15 16:33:05.967993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:20.584 [2024-07-15 16:33:05.973064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.584 [2024-07-15 16:33:05.973141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.584 [2024-07-15 16:33:05.973165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.584 [2024-07-15 16:33:05.978104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1278500) with pdu=0x2000190fef90 00:18:20.584 [2024-07-15 16:33:05.978174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.584 [2024-07-15 16:33:05.978197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.584 00:18:20.584 Latency(us) 00:18:20.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.584 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:20.584 nvme0n1 : 2.00 5974.04 746.75 0.00 0.00 2671.84 2189.50 5719.51 00:18:20.584 =================================================================================================================== 00:18:20.584 Total : 5974.04 746.75 0.00 0.00 2671.84 2189.50 5719.51 00:18:20.584 0 00:18:20.584 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:20.584 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:20.584 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:20.584 | .driver_specific 00:18:20.584 | .nvme_error 00:18:20.584 | .status_code 00:18:20.584 | .command_transient_transport_error' 00:18:20.584 16:33:06 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 385 > 0 )) 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80722 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80722 ']' 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80722 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80722 00:18:20.855 killing process with pid 80722 00:18:20.855 Received shutdown signal, test time was about 2.000000 seconds 00:18:20.855 00:18:20.855 Latency(us) 00:18:20.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.855 =================================================================================================================== 00:18:20.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80722' 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80722 00:18:20.855 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80722 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80509 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80509 ']' 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80509 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80509 00:18:21.114 killing process with pid 80509 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80509' 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80509 00:18:21.114 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80509 00:18:21.373 00:18:21.373 real 0m18.847s 00:18:21.373 user 0m36.761s 00:18:21.373 sys 0m4.787s 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:21.373 
************************************ 00:18:21.373 END TEST nvmf_digest_error 00:18:21.373 ************************************ 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:21.373 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:21.373 rmmod nvme_tcp 00:18:21.373 rmmod nvme_fabrics 00:18:21.634 rmmod nvme_keyring 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80509 ']' 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80509 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80509 ']' 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80509 00:18:21.634 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80509) - No such process 00:18:21.634 Process with pid 80509 is not found 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80509 is not found' 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:21.634 00:18:21.634 real 0m38.564s 00:18:21.634 user 1m13.806s 00:18:21.634 sys 0m9.777s 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:21.634 16:33:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:21.634 ************************************ 00:18:21.634 END TEST nvmf_digest 00:18:21.634 ************************************ 00:18:21.634 16:33:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:21.634 16:33:07 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:18:21.634 16:33:07 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:18:21.634 16:33:07 
nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:21.634 16:33:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:21.634 16:33:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:21.634 16:33:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:21.634 ************************************ 00:18:21.634 START TEST nvmf_host_multipath 00:18:21.634 ************************************ 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:21.634 * Looking for test storage... 00:18:21.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:21.634 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:21.635 Cannot 
find device "nvmf_tgt_br" 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:21.635 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:21.893 Cannot find device "nvmf_tgt_br2" 00:18:21.893 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:21.893 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:21.893 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:21.893 Cannot find device "nvmf_tgt_br" 00:18:21.893 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:21.893 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:21.893 Cannot find device "nvmf_tgt_br2" 00:18:21.893 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:21.893 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:21.893 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:21.893 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:21.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:21.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:21.894 16:33:07 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:21.894 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:22.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:18:22.153 00:18:22.153 --- 10.0.0.2 ping statistics --- 00:18:22.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.153 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:22.153 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:22.153 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:18:22.153 00:18:22.153 --- 10.0.0.3 ping statistics --- 00:18:22.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.153 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:22.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:22.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:22.153 00:18:22.153 --- 10.0.0.1 ping statistics --- 00:18:22.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.153 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80983 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80983 00:18:22.153 16:33:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:22.154 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80983 ']' 00:18:22.154 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.154 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.154 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.154 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.154 16:33:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:22.154 [2024-07-15 16:33:07.571070] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:18:22.154 [2024-07-15 16:33:07.571172] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.412 [2024-07-15 16:33:07.718238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:22.412 [2024-07-15 16:33:07.846460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:22.412 [2024-07-15 16:33:07.846535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.412 [2024-07-15 16:33:07.846558] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.412 [2024-07-15 16:33:07.846573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.412 [2024-07-15 16:33:07.846586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.412 [2024-07-15 16:33:07.846711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.412 [2024-07-15 16:33:07.846733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.412 [2024-07-15 16:33:07.905292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:23.348 16:33:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.348 16:33:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:23.348 16:33:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:23.348 16:33:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:23.348 16:33:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:23.348 16:33:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.348 16:33:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80983 00:18:23.348 16:33:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:23.348 [2024-07-15 16:33:08.895104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.606 16:33:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:23.606 Malloc0 00:18:23.865 16:33:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:24.124 16:33:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.383 16:33:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.641 [2024-07-15 16:33:09.941575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.641 16:33:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:24.641 [2024-07-15 16:33:10.165748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81034 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81034 /var/tmp/bdevperf.sock 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81034 ']' 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.641 16:33:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:26.043 16:33:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.043 16:33:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:26.043 16:33:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:26.043 16:33:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:26.301 Nvme0n1 00:18:26.301 16:33:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:26.867 Nvme0n1 00:18:26.867 16:33:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:26.867 16:33:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:27.802 16:33:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:27.802 16:33:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:28.060 16:33:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:28.319 16:33:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:28.319 16:33:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81085 00:18:28.319 16:33:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:28.319 16:33:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:34.931 Attaching 4 probes... 00:18:34.931 @path[10.0.0.2, 4421]: 17422 00:18:34.931 @path[10.0.0.2, 4421]: 17871 00:18:34.931 @path[10.0.0.2, 4421]: 17682 00:18:34.931 @path[10.0.0.2, 4421]: 17654 00:18:34.931 @path[10.0.0.2, 4421]: 17856 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81085 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:34.931 16:33:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:34.931 16:33:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:35.211 16:33:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:35.211 16:33:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.211 16:33:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81198 00:18:35.211 16:33:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:41.791 Attaching 4 probes... 
00:18:41.791 @path[10.0.0.2, 4420]: 17705 00:18:41.791 @path[10.0.0.2, 4420]: 18135 00:18:41.791 @path[10.0.0.2, 4420]: 18141 00:18:41.791 @path[10.0.0.2, 4420]: 18094 00:18:41.791 @path[10.0.0.2, 4420]: 18174 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81198 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:41.791 16:33:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:41.791 16:33:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:42.050 16:33:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:42.050 16:33:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:42.050 16:33:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81310 00:18:42.050 16:33:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.614 Attaching 4 probes... 
00:18:48.614 @path[10.0.0.2, 4421]: 13896 00:18:48.614 @path[10.0.0.2, 4421]: 17588 00:18:48.614 @path[10.0.0.2, 4421]: 17525 00:18:48.614 @path[10.0.0.2, 4421]: 17653 00:18:48.614 @path[10.0.0.2, 4421]: 17596 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81310 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:48.614 16:33:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:48.873 16:33:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:48.873 16:33:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81427 00:18:48.873 16:33:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:48.873 16:33:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:55.501 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:55.501 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:55.501 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:55.501 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.502 Attaching 4 probes... 
00:18:55.502 00:18:55.502 00:18:55.502 00:18:55.502 00:18:55.502 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81427 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81541 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:55.502 16:33:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:02.059 16:33:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:02.059 16:33:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:02.059 Attaching 4 probes... 
00:19:02.059 @path[10.0.0.2, 4421]: 16935 00:19:02.059 @path[10.0.0.2, 4421]: 17359 00:19:02.059 @path[10.0.0.2, 4421]: 17231 00:19:02.059 @path[10.0.0.2, 4421]: 17071 00:19:02.059 @path[10.0.0.2, 4421]: 17119 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81541 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:02.059 16:33:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:03.022 16:33:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:03.022 16:33:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81659 00:19:03.022 16:33:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:03.022 16:33:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:09.586 Attaching 4 probes... 
00:19:09.586 @path[10.0.0.2, 4420]: 14474 00:19:09.586 @path[10.0.0.2, 4420]: 13712 00:19:09.586 @path[10.0.0.2, 4420]: 13512 00:19:09.586 @path[10.0.0.2, 4420]: 13596 00:19:09.586 @path[10.0.0.2, 4420]: 14113 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81659 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:09.586 16:33:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:09.845 [2024-07-15 16:33:55.145481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:09.845 16:33:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:09.845 16:33:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:16.504 16:34:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:16.504 16:34:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81839 00:19:16.504 16:34:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:16.504 16:34:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.071 Attaching 4 probes... 
00:19:23.071 @path[10.0.0.2, 4421]: 16552 00:19:23.071 @path[10.0.0.2, 4421]: 16816 00:19:23.071 @path[10.0.0.2, 4421]: 16778 00:19:23.071 @path[10.0.0.2, 4421]: 16993 00:19:23.071 @path[10.0.0.2, 4421]: 16924 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81839 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81034 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81034 ']' 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81034 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81034 00:19:23.071 killing process with pid 81034 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81034' 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81034 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81034 00:19:23.071 Connection closed with partial response: 00:19:23.071 00:19:23.071 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81034 00:19:23.071 16:34:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:23.071 [2024-07-15 16:33:10.239192] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:19:23.071 [2024-07-15 16:33:10.239312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81034 ] 00:19:23.072 [2024-07-15 16:33:10.399006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.072 [2024-07-15 16:33:10.515498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.072 [2024-07-15 16:33:10.569208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:23.072 Running I/O for 90 seconds... 
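Once the final confirm_io_on_port check passes, the test tears down bdevperf (pid 81034) via the killprocess helper from common/autotest_common.sh. The sketch below is reconstructed only from the -x trace above (the @-numbers in the comments are the autotest_common.sh line references printed in the log); the real helper likely has additional branches, e.g. for sudo-wrapped processes, which are not exercised here.

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                               # @948: require a pid argument
    kill -0 "$pid"                                          # @952: verify the process still exists
    if [[ $(uname) == Linux ]]; then                        # @953
        process_name=$(ps --no-headers -o comm= "$pid")     # @954: resolves to "reactor_2" above
    fi
    if [[ $process_name != sudo ]]; then                    # @958: sudo-wrapped case not hit here
        echo "killing process with pid $pid"                # @966
        kill "$pid"                                         # @967
        wait "$pid"                                         # @972
    fi
}

The "Connection closed with partial response" message is bdevperf reporting the connection drop as it is killed; the test then dumps try.txt, which is the bdevperf log that follows, including the repeated ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions recorded while the 4421 path was being transitioned.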
00:19:23.072 [2024-07-15 16:33:20.508067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.508809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.508844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.508903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.508939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.508974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.508995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.072 [2024-07-15 16:33:20.509441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.509685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.509725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.509762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.072 [2024-07-15 16:33:20.509810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:23.072 [2024-07-15 16:33:20.509831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.509845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.509880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.509898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.509920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.509935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.509956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.509971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.509992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.510006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.510041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.510085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.510120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.510158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.510194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.510229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.510275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510333] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:19:23.073 [2024-07-15 16:33:20.510727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.510971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.510992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.511007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.511043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.511078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.511145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.511225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.511285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.073 [2024-07-15 16:33:20.511342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.511381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.511416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.511452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.511487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.073 [2024-07-15 16:33:20.511523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:23.073 [2024-07-15 16:33:20.511544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.511558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.511579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.511613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.511648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.511671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.511714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.511741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.511795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.511829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.511864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.511890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.511926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.511984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:23.074 [2024-07-15 16:33:20.512246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.512537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.512586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.512621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.512657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.512692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.512728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.512763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.512784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.512799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.074 [2024-07-15 16:33:20.514338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:23.074 [2024-07-15 16:33:20.514748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.074 [2024-07-15 16:33:20.514763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:20.514784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.514799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:20.514820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.514834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:20.514872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.514891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:20.514913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.514928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:19:23.075 [2024-07-15 16:33:20.514949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.514965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:20.514990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.515006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:20.515028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.515053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:20.515076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.515092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:20.515113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.515143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:20.515163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:20.515179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:27.097478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:27.097561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:27.097601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:27.097637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:27.097674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:27.097710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:27.097745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.075 [2024-07-15 16:33:27.097781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.097843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.097900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.097936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.097957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.097971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.075 [2024-07-15 16:33:27.098383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:23.075 [2024-07-15 16:33:27.098405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:23.075 [2024-07-15 16:33:27.098419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:19:23.075-00:19:23.078 [2024-07-15 16:33:27.098-27.103] nvme_qpair.c: repeated *NOTICE* pairs of the same form: READ commands (sqid:1 nsid:1 lba:15800-16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1 nsid:1 lba:16312-16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; only cid and sqhd vary per entry. 
00:19:23.078-00:19:23.080 [2024-07-15 16:33:34.156-34.159] nvme_qpair.c: the same *NOTICE* pattern repeats: WRITE commands (sqid:1 nsid:1 lba:49408-49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1 nsid:1 lba:48960-49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; only cid and sqhd vary per entry. 
00:19:23.080 [2024-07-15 16:33:34.159893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.159914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.159929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.159949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.159964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.159985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.159999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.080 [2024-07-15 16:33:34.160369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:23.080 [2024-07-15 16:33:34.160390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.160952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.160966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
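[editor's note, not test output] The "(03/02)" printed after each ASYMMETRIC ACCESS INACCESSIBLE completion above is the NVMe status code type / status code pair: SCT 0x3 is Path Related Status and SC 0x02 is Asymmetric Access Inaccessible, so the target is reporting the namespace's ANA state as inaccessible on this path while the test runs. As a reading aid only, here is a minimal standalone Python sketch that decodes the two code pairs that actually appear in this log; the script name and table contents are illustrative assumptions, not part of the SPDK test tooling.

  # decode_nvme_status.py - illustrative sketch, not part of the test run.
  # Decodes the "(SCT/SC)" pair printed by spdk_nvme_print_completion in the
  # log above, e.g. "03/02" -> Path Related Status / ASYMMETRIC ACCESS INACCESSIBLE.
  STATUS_CODE_TYPES = {
      0x0: "Generic Command Status",
      0x1: "Command Specific Status",
      0x2: "Media and Data Integrity Errors",
      0x3: "Path Related Status",
      0x7: "Vendor Specific",
  }
  # Only the status codes that actually appear in this log are listed here,
  # using the same strings SPDK prints for them.
  STATUS_CODES = {
      (0x0, 0x08): "ABORTED - SQ DELETION",
      (0x3, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",
  }

  def decode(pair: str) -> str:
      """Turn a log token like '03/02' into a readable status description."""
      sct, sc = (int(x, 16) for x in pair.split("/"))
      sct_name = STATUS_CODE_TYPES.get(sct, f"unknown SCT {sct:#x}")
      sc_name = STATUS_CODES.get((sct, sc), f"unknown SC {sc:#x}")
      return f"{sct_name} / {sc_name}"

  if __name__ == "__main__":
      print(decode("03/02"))  # Path Related Status / ASYMMETRIC ACCESS INACCESSIBLE
      print(decode("00/08"))  # Generic Command Status / ABORTED - SQ DELETION
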
00:19:23.081 [2024-07-15 16:33:34.160986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.161001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.161029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.161045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.161081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.161098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.161953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.161980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.162031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.162077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.162121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:34.162166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:34.162211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:34.162255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:34.162300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:34.162344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:34.162389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:34.162450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:34.162504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:34.162551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:34.162571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.521569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:47.521643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.521703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:47.521724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.521747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:47.521762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.521784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:47.521803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.521825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:47.521840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.521880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:47.521910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.521935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:47.521950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.521972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.081 [2024-07-15 16:33:47.521986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.522007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:47.522022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.081 [2024-07-15 16:33:47.522043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.081 [2024-07-15 16:33:47.522057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.522120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.522155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.522190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:23.082 [2024-07-15 16:33:47.522224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.522258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.522293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 
[2024-07-15 16:33:47.522580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.522945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.522973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.522989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.082 [2024-07-15 16:33:47.523406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.082 [2024-07-15 16:33:47.523429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.082 [2024-07-15 16:33:47.523443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.523761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 
[2024-07-15 16:33:47.523776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.523789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.523819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.523847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.523889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.523937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.523978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.523992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.083 [2024-07-15 16:33:47.524271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:66 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.083 [2024-07-15 16:33:47.524686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94240 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:23.083 [2024-07-15 16:33:47.524699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:23.083 - 00:19:23.085 [... repeated nvme_qpair.c print_command/print_completion *NOTICE* pairs and "aborting queued i/o" *ERROR* entries omitted: every outstanding I/O on qid:1 (READ lba 93616-93800 and WRITE lba 94248-94312, len:8) is printed, completed manually, and reported as ABORTED - SQ DELETION (00/08) while the recv state of tqpair=0xd856d0 is torn down ...]
00:19:23.085 [2024-07-15 16:33:47.526119] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd856d0 was disconnected and freed. reset controller.
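In these printouts the parenthesised pair is the NVMe status code type and status code; (00/08) decodes to Generic Command Status / Command Aborted due to SQ Deletion, which is what in-flight commands on a submission queue receive when that queue is deleted, as happens here while the qpair is torn down for a controller reset. A minimal decode helper for the two values that appear in this log (hypothetical, not part of the SPDK test scripts):

# Hypothetical helper: decode the "(SCT/SC)" pair printed by
# spdk_nvme_print_completion, for the two values seen in this log.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
        00/00) echo "Generic Command Status / Successful Completion" ;;
        00/08) echo "Generic Command Status / Command Aborted due to SQ Deletion" ;;
        *)     echo "not decoded here - see the status code tables in the NVMe base spec" ;;
    esac
}
decode_nvme_status 00 08    # -> Generic Command Status / Command Aborted due to SQ Deletion

The same pattern appears again further down, after the timeout test removes its listener while I/O is in flight.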
00:19:23.085 [2024-07-15 16:33:47.526246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.085 [2024-07-15 16:33:47.526271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.085 [2024-07-15 16:33:47.526286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.085 [2024-07-15 16:33:47.526299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.085 [2024-07-15 16:33:47.526320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.085 [2024-07-15 16:33:47.526333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.085 [2024-07-15 16:33:47.526347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.085 [2024-07-15 16:33:47.526364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.085 [2024-07-15 16:33:47.526378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.085 [2024-07-15 16:33:47.526391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.085 [2024-07-15 16:33:47.526411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff100 is same with the state(5) to be set 00:19:23.085 [2024-07-15 16:33:47.527553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:23.085 [2024-07-15 16:33:47.527592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcff100 (9): Bad file descriptor 00:19:23.085 [2024-07-15 16:33:47.528012] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:23.085 [2024-07-15 16:33:47.528045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcff100 with addr=10.0.0.2, port=4421 00:19:23.085 [2024-07-15 16:33:47.528062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff100 is same with the state(5) to be set 00:19:23.085 [2024-07-15 16:33:47.528095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcff100 (9): Bad file descriptor 00:19:23.085 [2024-07-15 16:33:47.528124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:23.085 [2024-07-15 16:33:47.528153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:23.085 [2024-07-15 16:33:47.528168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:23.085 [2024-07-15 16:33:47.528201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
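errno 111 is ECONNREFUSED: at this moment nothing is accepting connections on 10.0.0.2 port 4421, so the reconnect attempt is refused, the controller is marked failed for this attempt, and bdev_nvme schedules another reset; the next entries show the reset completing roughly ten seconds later. As a rough illustration only (not a step the multipath script runs at this point), restoring a listener on the refused port is what lets such a retry succeed; the NQN, address and RPC socket names are taken from this log:

# Sketch only, reusing names from this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Restore a listener on the port that was refused, so the pending reconnect to
# 10.0.0.2:4421 can complete instead of failing with errno 111:
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# bdev_nvme keeps retrying on its own; the controller state can be watched from
# the initiator-side (bdevperf) RPC socket while it recovers:
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers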
00:19:23.085 [2024-07-15 16:33:47.528217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:23.085 [2024-07-15 16:33:57.588323] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:23.085 Received shutdown signal, test time was about 55.511301 seconds
00:19:23.085
00:19:23.085                                                                                                Latency(us)
00:19:23.085 Device Information                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:23.085 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:23.085      Verification LBA range: start 0x0 length 0x4000
00:19:23.085      Nvme0n1                                           :      55.51    7250.31      28.32       0.00     0.00   17619.70    1102.20 7015926.69
00:19:23.085 ===================================================================================================================
00:19:23.085 Total                                                  :               7250.31      28.32       0.00     0.00   17619.70    1102.20 7015926.69
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:23.085 rmmod nvme_tcp
00:19:23.085 rmmod nvme_fabrics
00:19:23.085 rmmod nvme_keyring
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80983 ']'
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80983
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80983 ']'
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80983
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80983
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:23.085 killing process with pid 80983
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80983'
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80983
00:19:23.085 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80983
00:19:23.354 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:23.354 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:23.357 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:23.357 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:23.357 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:23.357 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:23.357 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:23.357 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:23.357 16:34:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:19:23.357
00:19:23.357 real 1m1.673s
00:19:23.357 user 2m51.499s
00:19:23.357 sys 0m18.168s
00:19:23.357 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:23.357 16:34:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:19:23.357 ************************************
00:19:23.357 END TEST nvmf_host_multipath
00:19:23.357 ************************************
00:19:23.357 16:34:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:19:23.357 16:34:08 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:19:23.357 16:34:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:19:23.357 16:34:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:23.357 16:34:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:23.357 ************************************
00:19:23.357 START TEST nvmf_timeout
00:19:23.357 ************************************
00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:19:23.357 * Looking for test storage...
00:19:23.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.357 
16:34:08 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.357 16:34:08 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:23.357 Cannot find device "nvmf_tgt_br" 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:23.357 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:23.622 Cannot find device "nvmf_tgt_br2" 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:23.622 Cannot find device "nvmf_tgt_br" 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:23.622 Cannot find device "nvmf_tgt_br2" 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:23.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:23.622 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:23.622 16:34:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:23.622 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:23.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:19:23.881 00:19:23.881 --- 10.0.0.2 ping statistics --- 00:19:23.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.881 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:23.881 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:23.881 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:19:23.881 00:19:23.881 --- 10.0.0.3 ping statistics --- 00:19:23.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.881 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:23.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:19:23.881 00:19:23.881 --- 10.0.0.1 ping statistics --- 00:19:23.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.881 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82143 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82143 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82143 ']' 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.881 16:34:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:23.881 [2024-07-15 16:34:09.266328] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:19:23.881 [2024-07-15 16:34:09.266402] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.881 [2024-07-15 16:34:09.405694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:24.140 [2024-07-15 16:34:09.541630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.140 [2024-07-15 16:34:09.541715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.140 [2024-07-15 16:34:09.541730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.140 [2024-07-15 16:34:09.541741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.140 [2024-07-15 16:34:09.541750] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.140 [2024-07-15 16:34:09.541919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.140 [2024-07-15 16:34:09.542120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.140 [2024-07-15 16:34:09.598851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:24.706 16:34:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.706 16:34:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:24.706 16:34:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.706 16:34:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:24.706 16:34:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:24.965 16:34:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.965 16:34:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:24.965 16:34:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:25.224 [2024-07-15 16:34:10.536139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.224 16:34:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:25.483 Malloc0 00:19:25.483 16:34:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:25.741 16:34:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.741 16:34:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.308 [2024-07-15 16:34:11.568380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.308 16:34:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82198 00:19:26.308 16:34:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:26.308 16:34:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82198 /var/tmp/bdevperf.sock 00:19:26.308 16:34:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82198 ']' 00:19:26.308 16:34:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.308 16:34:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.308 16:34:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.308 16:34:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.308 16:34:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:26.308 [2024-07-15 16:34:11.635113] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:19:26.308 [2024-07-15 16:34:11.635190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82198 ] 00:19:26.308 [2024-07-15 16:34:11.771404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.567 [2024-07-15 16:34:11.884413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.567 [2024-07-15 16:34:11.937277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:27.135 16:34:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.135 16:34:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:27.135 16:34:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:27.393 16:34:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:27.652 NVMe0n1 00:19:27.652 16:34:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82216 00:19:27.652 16:34:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.652 16:34:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:27.652 Running I/O for 10 seconds... 
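At this point the timeout test's plumbing is fully up: nvmf_tgt (pid 82143) serves nqn.2016-06.io.spdk:cnode1 with a 64 MiB, 512-byte-block Malloc0 namespace listening on 10.0.0.2:4420, and a separate bdevperf process (pid 82198, queue depth 128, 4096-byte verify I/O for 10 seconds) attaches with a 5-second ctrlr-loss timeout and 2-second reconnect delay. Collected in one place, the RPC sequence from the trace above looks roughly like this; a sketch only, where $rpc and $brpc are shorthand introduced here and both daemons are assumed to be already running as started earlier in the log:

# Assumes nvmf_tgt is already running inside nvmf_tgt_ns_spdk (pid 82143) and
# bdevperf was started as: build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
#   -q 128 -o 4096 -w verify -t 10 -f   (pid 82198)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc="$rpc -s /var/tmp/bdevperf.sock"    # initiator-side (bdevperf) RPC socket

# Target side: transport, backing bdev, subsystem, namespace, listener.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: options call taken verbatim from host/timeout.sh@45 above,
# then attach with a 5 s ctrlr-loss timeout and 2 s reconnect delay, then
# kick off the I/O job in the background (host/timeout.sh@50).
$brpc bdev_nvme_set_options -r -1
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The listener removal on the next line, issued one second into the run, is what produces the ABORTED - SQ DELETION storm that follows; the 5 s / 2 s settings bound how long and how often bdev_nvme retries the reconnect before declaring the controller lost.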
00:19:28.688 16:34:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.948 [2024-07-15 16:34:14.292988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113de50 is same with the state(5) to be set 00:19:28.948 [2024-07-15 16:34:14.293053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113de50 is same with the state(5) to be set 00:19:28.948 [2024-07-15 16:34:14.293065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113de50 is same with the state(5) to be set 00:19:28.948 [2024-07-15 16:34:14.293081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113de50 is same with the state(5) to be set 00:19:28.948 [2024-07-15 16:34:14.293108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113de50 is same with the state(5) to be set 00:19:28.948 [2024-07-15 16:34:14.293619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.948 [2024-07-15 16:34:14.293655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.948 [2024-07-15 16:34:14.293677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.293689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.293710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.293730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.293751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.293771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.293792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.293819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.293839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.293872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.293894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.293915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.293935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.293955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.293975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.293987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.293996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.294018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:81 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.294039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.294060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.294080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.294100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.949 [2024-07-15 16:34:14.294120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75624 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 
16:34:14.294442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.949 [2024-07-15 16:34:14.294559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.949 [2024-07-15 16:34:14.294568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.294628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.294648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.294669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.294690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.294710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.294731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.294751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.294771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.294982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.294991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.295133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.295154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.295174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.295194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.295214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.295235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.295254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.950 [2024-07-15 16:34:14.295275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 
[2024-07-15 16:34:14.295286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.950 [2024-07-15 16:34:14.295446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.950 [2024-07-15 16:34:14.295455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.951 [2024-07-15 16:34:14.295682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:86 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.295981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.295992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.951 [2024-07-15 16:34:14.296001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.296012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8874d0 is same with the state(5) to be set 00:19:28.951 [2024-07-15 16:34:14.296024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75512 len:8 PRP1 0x0 PRP2 0x0 00:19:28.951 [2024-07-15 16:34:14.296049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.296059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76064 len:8 PRP1 0x0 PRP2 0x0 00:19:28.951 [2024-07-15 16:34:14.296083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.296092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76072 len:8 PRP1 0x0 PRP2 0x0 00:19:28.951 [2024-07-15 16:34:14.296116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 
16:34:14.296125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76080 len:8 PRP1 0x0 PRP2 0x0 00:19:28.951 [2024-07-15 16:34:14.296148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.296157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76088 len:8 PRP1 0x0 PRP2 0x0 00:19:28.951 [2024-07-15 16:34:14.296180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.296190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76096 len:8 PRP1 0x0 PRP2 0x0 00:19:28.951 [2024-07-15 16:34:14.296213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.296230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76104 len:8 PRP1 0x0 PRP2 0x0 00:19:28.951 [2024-07-15 16:34:14.296260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.296269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76112 len:8 PRP1 0x0 PRP2 0x0 00:19:28.951 [2024-07-15 16:34:14.296292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.296301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76120 len:8 PRP1 0x0 PRP2 0x0 00:19:28.951 [2024-07-15 16:34:14.296324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.951 [2024-07-15 16:34:14.296332] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.951 [2024-07-15 16:34:14.296339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.951 [2024-07-15 16:34:14.296347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76128 len:8 PRP1 0x0 PRP2 0x0 00:19:28.952 [2024-07-15 16:34:14.296356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 [2024-07-15 16:34:14.296365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.952 [2024-07-15 16:34:14.296371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.952 [2024-07-15 16:34:14.296379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76136 len:8 PRP1 0x0 PRP2 0x0 00:19:28.952 [2024-07-15 16:34:14.296387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 [2024-07-15 16:34:14.296396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.952 [2024-07-15 16:34:14.296404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.952 [2024-07-15 16:34:14.296411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76144 len:8 PRP1 0x0 PRP2 0x0 00:19:28.952 [2024-07-15 16:34:14.296420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 [2024-07-15 16:34:14.296429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.952 [2024-07-15 16:34:14.296437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.952 [2024-07-15 16:34:14.296444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76152 len:8 PRP1 0x0 PRP2 0x0 00:19:28.952 [2024-07-15 16:34:14.296452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 16:34:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:28.952 [2024-07-15 16:34:14.309544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.952 [2024-07-15 16:34:14.309573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.952 [2024-07-15 16:34:14.309585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76160 len:8 PRP1 0x0 PRP2 0x0 00:19:28.952 [2024-07-15 16:34:14.309598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 [2024-07-15 16:34:14.309611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.952 [2024-07-15 16:34:14.309618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.952 [2024-07-15 16:34:14.309627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76168 len:8 PRP1 0x0 PRP2 0x0 00:19:28.952 [2024-07-15 16:34:14.309636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 [2024-07-15 
16:34:14.309724] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8874d0 was disconnected and freed. reset controller. 00:19:28.952 [2024-07-15 16:34:14.309902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.952 [2024-07-15 16:34:14.309920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 [2024-07-15 16:34:14.309933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.952 [2024-07-15 16:34:14.309943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 [2024-07-15 16:34:14.309954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.952 [2024-07-15 16:34:14.309963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 [2024-07-15 16:34:14.309972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.952 [2024-07-15 16:34:14.309981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.952 [2024-07-15 16:34:14.309990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83cd40 is same with the state(5) to be set 00:19:28.952 [2024-07-15 16:34:14.310230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.952 [2024-07-15 16:34:14.310253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83cd40 (9): Bad file descriptor 00:19:28.952 [2024-07-15 16:34:14.310354] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.952 [2024-07-15 16:34:14.310375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83cd40 with addr=10.0.0.2, port=4420 00:19:28.952 [2024-07-15 16:34:14.310386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83cd40 is same with the state(5) to be set 00:19:28.952 [2024-07-15 16:34:14.310404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83cd40 (9): Bad file descriptor 00:19:28.952 [2024-07-15 16:34:14.310420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.952 [2024-07-15 16:34:14.310429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:28.952 [2024-07-15 16:34:14.310440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.952 [2024-07-15 16:34:14.310460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
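The flood of ABORTED - SQ DELETION (00/08) completions above, followed by connect() failing with errno = 111 (ECONNREFUSED), is the expected shape of this stage of the test: the target side has stopped accepting connections, so the I/O still queued on the submission queue is completed with an abort status when the qpair is torn down, and every reconnect attempt is refused until the controller-loss timeout runs out. A quick way to gauge the size of the abort flood from a saved copy of this console output is a pair of occurrence counts (the log file name is illustrative, not something the test produces):

# Count occurrences of the two failure signatures in a saved console log.
grep -o 'ABORTED - SQ DELETION (00/08)' nvmf-timeout-console.log | wc -l
grep -o 'connect() failed, errno = 111' nvmf-timeout-console.log | wc -l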
00:19:28.952 [2024-07-15 16:34:14.310471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.853 [2024-07-15 16:34:16.310806] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.853 [2024-07-15 16:34:16.310888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83cd40 with addr=10.0.0.2, port=4420 00:19:30.853 [2024-07-15 16:34:16.310906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83cd40 is same with the state(5) to be set 00:19:30.853 [2024-07-15 16:34:16.310933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83cd40 (9): Bad file descriptor 00:19:30.853 [2024-07-15 16:34:16.310952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.853 [2024-07-15 16:34:16.310962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:30.853 [2024-07-15 16:34:16.310974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.853 [2024-07-15 16:34:16.311001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.853 [2024-07-15 16:34:16.311013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.853 16:34:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:30.853 16:34:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:30.853 16:34:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:31.111 16:34:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:31.111 16:34:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:31.111 16:34:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:31.111 16:34:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:31.369 16:34:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:31.369 16:34:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:33.268 [2024-07-15 16:34:18.311266] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.268 [2024-07-15 16:34:18.311324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83cd40 with addr=10.0.0.2, port=4420 00:19:33.268 [2024-07-15 16:34:18.311341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83cd40 is same with the state(5) to be set 00:19:33.268 [2024-07-15 16:34:18.311367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83cd40 (9): Bad file descriptor 00:19:33.268 [2024-07-15 16:34:18.311400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:33.268 [2024-07-15 16:34:18.311412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:33.268 [2024-07-15 16:34:18.311424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
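The get_controller and get_bdev steps traced above (host/timeout.sh@41 and @37) poll the bdevperf application's private RPC socket: two seconds after the link drop the test still expects the NVMe0 controller and its NVMe0n1 bdev to exist, because the controller-loss timeout has not fired yet. A sketch of those helpers, reconstructed from the commands in the trace (paths and RPC method names are verbatim from the log; the shell variable and function names are illustrative):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

get_controller() {
    # List the bdev_nvme controllers attached inside the bdevperf app.
    "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    # List the block devices the app currently exposes.
    "$rpc_py" -s "$bdevperf_rpc_sock" bdev_get_bdevs | jq -r '.[].name'
}

[[ $(get_controller) == NVMe0 ]]    # still present before the loss timeout
[[ $(get_bdev) == NVMe0n1 ]]

After the subsequent sleep 5 the loss timeout has expired and bdev_nvme deletes the controller and its bdev, so the same two helpers print nothing; that is what the later [[ '' == '' ]] checks assert.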
00:19:33.268 [2024-07-15 16:34:18.311450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:33.268 [2024-07-15 16:34:18.311462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.173 [2024-07-15 16:34:20.311612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.173 [2024-07-15 16:34:20.311688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.173 [2024-07-15 16:34:20.311718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:35.173 [2024-07-15 16:34:20.311728] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:35.173 [2024-07-15 16:34:20.311758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:36.109 00:19:36.109 Latency(us) 00:19:36.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.109 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.109 Verification LBA range: start 0x0 length 0x4000 00:19:36.109 NVMe0n1 : 8.12 1157.28 4.52 15.77 0.00 109192.01 3872.58 7046430.72 00:19:36.109 =================================================================================================================== 00:19:36.109 Total : 1157.28 4.52 15.77 0.00 109192.01 3872.58 7046430.72 00:19:36.109 0 00:19:36.381 16:34:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:36.381 16:34:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:36.381 16:34:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:36.679 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:36.679 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:36.679 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:36.679 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82216 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82198 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82198 ']' 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82198 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82198 00:19:36.938 killing process with pid 82198 00:19:36.938 Received shutdown signal, test time was about 9.193537 seconds 00:19:36.938 00:19:36.938 Latency(us) 00:19:36.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.938 =================================================================================================================== 00:19:36.938 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82198' 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82198 00:19:36.938 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82198 00:19:37.197 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:37.456 [2024-07-15 16:34:22.816960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.456 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82338 00:19:37.456 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:37.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.456 16:34:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82338 /var/tmp/bdevperf.sock 00:19:37.456 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82338 ']' 00:19:37.456 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.456 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.456 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.456 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.456 16:34:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.456 [2024-07-15 16:34:22.892111] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
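For the second run the test adds the NVMe/TCP listener back on the target ("Listening on 10.0.0.2 port 4420" above) and starts a fresh bdevperf in wait mode (-z) on its own RPC socket, configuring it only once that socket answers, which is what the waitforlisten helper is doing. Gathered from the trace, the sequence is roughly the following (the add_listener command and the bdevperf flags are verbatim from the log; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten, and using rpc_get_methods as the readiness probe is an assumption):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# Add the NVMe/TCP listener back on the target.
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start bdevperf idle (-z) so it can be configured over its private RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$bdevperf_rpc_sock" -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!

# Simplified stand-in for waitforlisten: poll until the RPC socket responds.
until "$rpc_py" -s "$bdevperf_rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done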
00:19:37.456 [2024-07-15 16:34:22.892526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82338 ] 00:19:37.714 [2024-07-15 16:34:23.030579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.714 [2024-07-15 16:34:23.132003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.714 [2024-07-15 16:34:23.185613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:38.280 16:34:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.280 16:34:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:38.280 16:34:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:38.538 16:34:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:38.795 NVMe0n1 00:19:38.795 16:34:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82356 00:19:38.795 16:34:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:38.795 16:34:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:39.053 Running I/O for 10 seconds... 00:19:39.988 16:34:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.249 [2024-07-15 16:34:25.587085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587246] 
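With the new bdevperf up, the trace above attaches NVMe0 with explicit recovery knobs (a 5 s controller-loss timeout, a 2 s fast-I/O-fail window, 1 s between reconnect attempts), starts the workload through bdevperf.py perform_tests, and then removes the target listener again; pulling the listener mid-run is what produces the SQ DELETION abort flood that follows. The same steps gathered in one place (commands verbatim from the trace; only the shell variables are added for readability):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# bdev_nvme options exactly as traced (-r -1 is the retry setting used by the test).
"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_set_options -r -1
"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Drive I/O from inside the bdevperf app, then yank the listener mid-run.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bdevperf_rpc_sock" perform_tests &
sleep 1
"$rpc_py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420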
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587462] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.249 [2024-07-15 16:34:25.587527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.249 [2024-07-15 16:34:25.587539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.250 [2024-07-15 16:34:25.587549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.250 [2024-07-15 16:34:25.587570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.250 [2024-07-15 16:34:25.587592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.250 [2024-07-15 16:34:25.587613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.250 [2024-07-15 16:34:25.587634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 
[2024-07-15 16:34:25.587919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.587972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.587985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.250 [2024-07-15 16:34:25.587995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.250 [2024-07-15 16:34:25.588016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588132] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.250 [2024-07-15 16:34:25.588181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.250 [2024-07-15 16:34:25.588400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.250 [2024-07-15 16:34:25.588409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:27 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64104 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.588967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:40.251 [2024-07-15 16:34:25.588987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.588999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589226] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.251 [2024-07-15 16:34:25.589330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.251 [2024-07-15 16:34:25.589341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.252 [2024-07-15 16:34:25.589839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.252 [2024-07-15 16:34:25.589848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.252 [2024-07-15 16:34:25.589869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.252 [2024-07-15 16:34:25.589879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.252 [2024-07-15 16:34:25.589895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0c4d0 is same with the state(5) to be set
00:19:40.252 [2024-07-15 16:34:25.589909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:40.252 [2024-07-15 16:34:25.589916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:40.252 [2024-07-15 16:34:25.589925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64520 len:8 PRP1 0x0 PRP2 0x0
00:19:40.252 [2024-07-15 16:34:25.589934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.252 [2024-07-15 16:34:25.589987] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e0c4d0 was disconnected and freed. reset controller.
00:19:40.252 [2024-07-15 16:34:25.590249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:40.252 [2024-07-15 16:34:25.590323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc1d40 (9): Bad file descriptor
00:19:40.252 [2024-07-15 16:34:25.590426] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:40.252 [2024-07-15 16:34:25.590447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc1d40 with addr=10.0.0.2, port=4420
00:19:40.252 [2024-07-15 16:34:25.590458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1d40 is same with the state(5) to be set
00:19:40.252 [2024-07-15 16:34:25.590475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc1d40 (9): Bad file descriptor
00:19:40.252 [2024-07-15 16:34:25.590500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:40.252 [2024-07-15 16:34:25.590510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:40.252 [2024-07-15 16:34:25.590521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:40.252 [2024-07-15 16:34:25.590541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:40.252 [2024-07-15 16:34:25.590552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
16:34:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:41.279 [2024-07-15 16:34:26.590690] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:41.279 [2024-07-15 16:34:26.590778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc1d40 with addr=10.0.0.2, port=4420
00:19:41.280 [2024-07-15 16:34:26.590795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1d40 is same with the state(5) to be set
00:19:41.280 [2024-07-15 16:34:26.590821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc1d40 (9): Bad file descriptor
00:19:41.280 [2024-07-15 16:34:26.590840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:41.280 [2024-07-15 16:34:26.590849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:41.280 [2024-07-15 16:34:26.590860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:41.280 [2024-07-15 16:34:26.590904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:41.280 [2024-07-15 16:34:26.590918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
16:34:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:41.537 [2024-07-15 16:34:26.854842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:41.537 16:34:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82356
00:19:42.101 [2024-07-15 16:34:27.606929] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:50.271
00:19:50.271                                        Latency(us)
00:19:50.271 Device Information     : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:19:50.271 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:50.271 Verification LBA range: start 0x0 length 0x4000
00:19:50.271     NVMe0n1            :      10.01    6351.60      24.81       0.00      0.00   20106.91    1325.61 3019898.88
00:19:50.271 ===================================================================================================================
00:19:50.271 Total                  :               6351.60      24.81       0.00      0.00   20106.91    1325.61 3019898.88
00:19:50.271 0
00:19:50.271 16:34:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82466
00:19:50.271 16:34:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:50.271 16:34:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:50.271 Running I/O for 10 seconds...
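
The two rpc.py invocations in this run drive the behaviour seen in the log: the "connect() failed, errno = 111" retries above are ECONNREFUSED because no listener exists at 10.0.0.2:4420, the reset succeeds only once nvmf_subsystem_add_listener re-creates the TCP listener (the "Target Listening" notice), and the nvmf_subsystem_remove_listener call right after this block tears it down again to provoke the timeouts measured in the next 10-second run. As a minimal sketch of what those rpc.py calls amount to on the wire, the snippet below sends the equivalent JSON-RPC 2.0 requests by hand. It assumes the target's RPC socket is rpc.py's default /var/tmp/spdk.sock and that both listener RPCs take an "nqn" plus a nested "listen_address" object; the spdk_rpc helper is our own illustration, not part of SPDK.

# sketch_listener_rpc.py - hypothetical helper mirroring the rpc.py calls in this log
import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # assumed: rpc.py's default socket when no -s flag is given

def spdk_rpc(method, params, sock_path=RPC_SOCK):
    """Send one JSON-RPC 2.0 request over SPDK's Unix-domain socket and return the parsed reply."""
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                # The reply is a single JSON object; keep reading until it parses.
                return json.loads(buf.decode())
            except ValueError:
                continue

listener = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    # assumed parameter shape; adrfam is omitted and left to the target's default
    "listen_address": {"trtype": "tcp", "traddr": "10.0.0.2", "trsvcid": "4420"},
}

print(spdk_rpc("nvmf_subsystem_add_listener", listener))     # host/timeout.sh@91
print(spdk_rpc("nvmf_subsystem_remove_listener", listener))  # host/timeout.sh@99

For reference, the MiB/s column in the table above is just IOPS times the 4096-byte I/O size (6351.60 x 4096 bytes/s is about 24.81 MiB/s), and bdevperf.py -s /var/tmp/bdevperf.sock perform_tests presumably performs the same kind of JSON-RPC exchange, only against bdevperf's own socket.
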
00:19:50.271 16:34:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.271 [2024-07-15 16:34:35.784644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.271 [2024-07-15 16:34:35.784702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 
[2024-07-15 16:34:35.784928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.784981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.271 [2024-07-15 16:34:35.784990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.271 [2024-07-15 16:34:35.785001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785786] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.785981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.785991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786002] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67400 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.272 [2024-07-15 16:34:35.786341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.272 [2024-07-15 16:34:35.786352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 
[2024-07-15 16:34:35.786423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.786983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.786993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.787013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.787033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.787053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.787073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 
16:34:35.787302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.273 [2024-07-15 16:34:35.787401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.273 [2024-07-15 16:34:35.787422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0faa0 is same with the state(5) to be set 00:19:50.273 [2024-07-15 16:34:35.787450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.273 [2024-07-15 16:34:35.787462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.273 [2024-07-15 16:34:35.787470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67736 len:8 PRP1 0x0 PRP2 0x0 00:19:50.273 [2024-07-15 16:34:35.787479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787531] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e0faa0 was disconnected and freed. reset controller. 
00:19:50.273 [2024-07-15 16:34:35.787607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.273 [2024-07-15 16:34:35.787629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.273 [2024-07-15 16:34:35.787650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.273 [2024-07-15 16:34:35.787669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.273 [2024-07-15 16:34:35.787687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.273 [2024-07-15 16:34:35.787696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1d40 is same with the state(5) to be set 00:19:50.273 [2024-07-15 16:34:35.788061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:50.273 [2024-07-15 16:34:35.788227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc1d40 (9): Bad file descriptor 00:19:50.273 [2024-07-15 16:34:35.788448] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.273 [2024-07-15 16:34:35.788472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc1d40 with addr=10.0.0.2, port=4420 00:19:50.273 [2024-07-15 16:34:35.788483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1d40 is same with the state(5) to be set 00:19:50.273 [2024-07-15 16:34:35.788502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc1d40 (9): Bad file descriptor 00:19:50.273 [2024-07-15 16:34:35.788532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:50.273 [2024-07-15 16:34:35.788543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:50.273 [2024-07-15 16:34:35.788554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:50.273 [2024-07-15 16:34:35.788574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:50.273 [2024-07-15 16:34:35.788585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:50.273 16:34:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:51.648 [2024-07-15 16:34:36.788717] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.648 [2024-07-15 16:34:36.788939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc1d40 with addr=10.0.0.2, port=4420 00:19:51.648 [2024-07-15 16:34:36.789101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1d40 is same with the state(5) to be set 00:19:51.648 [2024-07-15 16:34:36.789188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc1d40 (9): Bad file descriptor 00:19:51.648 [2024-07-15 16:34:36.789345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:51.648 [2024-07-15 16:34:36.789413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:51.648 [2024-07-15 16:34:36.789543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:51.648 [2024-07-15 16:34:36.789602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.648 [2024-07-15 16:34:36.789766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:52.644 [2024-07-15 16:34:37.790084] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.644 [2024-07-15 16:34:37.790368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc1d40 with addr=10.0.0.2, port=4420 00:19:52.644 [2024-07-15 16:34:37.790516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1d40 is same with the state(5) to be set 00:19:52.644 [2024-07-15 16:34:37.790675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc1d40 (9): Bad file descriptor 00:19:52.644 [2024-07-15 16:34:37.790824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.644 [2024-07-15 16:34:37.790960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:52.644 [2024-07-15 16:34:37.791106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:52.644 [2024-07-15 16:34:37.791318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:52.644 [2024-07-15 16:34:37.791437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.580 [2024-07-15 16:34:38.792784] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.580 [2024-07-15 16:34:38.793067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc1d40 with addr=10.0.0.2, port=4420 00:19:53.580 [2024-07-15 16:34:38.793302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1d40 is same with the state(5) to be set 00:19:53.580 [2024-07-15 16:34:38.793686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc1d40 (9): Bad file descriptor 00:19:53.580 [2024-07-15 16:34:38.794084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.580 [2024-07-15 16:34:38.794240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.580 [2024-07-15 16:34:38.794386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.580 [2024-07-15 16:34:38.798322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.580 [2024-07-15 16:34:38.798459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.580 16:34:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.580 [2024-07-15 16:34:39.093728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.580 16:34:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82466 00:19:54.514 [2024-07-15 16:34:39.837080] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:59.778 00:19:59.778 Latency(us) 00:19:59.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.778 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.778 Verification LBA range: start 0x0 length 0x4000 00:19:59.778 NVMe0n1 : 10.01 5357.62 20.93 3706.13 0.00 14094.51 666.53 3019898.88 00:19:59.778 =================================================================================================================== 00:19:59.778 Total : 5357.62 20.93 3706.13 0.00 14094.51 0.00 3019898.88 00:19:59.778 0 00:19:59.778 16:34:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82338 00:19:59.778 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82338 ']' 00:19:59.778 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82338 00:19:59.778 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:59.778 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.778 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82338 00:19:59.778 killing process with pid 82338 00:19:59.778 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.778 00:19:59.778 Latency(us) 00:19:59.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.779 =================================================================================================================== 00:19:59.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82338' 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82338 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82338 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82575 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82575 /var/tmp/bdevperf.sock 00:19:59.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82575 ']' 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.779 16:34:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:59.779 [2024-07-15 16:34:44.937146] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
00:19:59.779 [2024-07-15 16:34:44.937519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82575 ] 00:19:59.779 [2024-07-15 16:34:45.078451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.779 [2024-07-15 16:34:45.186678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.779 [2024-07-15 16:34:45.243495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:00.354 16:34:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.354 16:34:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:00.354 16:34:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82591 00:20:00.354 16:34:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:00.354 16:34:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82575 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:00.613 16:34:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:01.179 NVMe0n1 00:20:01.179 16:34:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82633 00:20:01.179 16:34:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.179 16:34:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:01.179 Running I/O for 10 seconds... 
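The trace above assembles the second half of the timeout test: a fresh bdevperf instance pinned to core mask 0x4, bpftrace probes attached to its pid, and a controller attached with explicit reconnect limits before the workload starts. Regrouped as a bash sketch for readability (paths, flags and addresses are copied from the trace; the backgrounding and variable plumbing are assumed, and the comments are one reading of the flags rather than authoritative documentation):

  # Second-phase setup as traced at host/timeout.sh@109-@125; shell plumbing is assumed.
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # bdevperf: core mask 0x4, queue depth 128, 4096-byte random reads for 10 seconds,
  # started idle (-z) and driven later over the RPC socket.
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!   # 82575 in the trace

  # NVMe bdev driver options exactly as issued in the trace.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1 -e 9

  # BPF probes hooked to the bdevperf pid via the repo's bpftrace.sh helper (dtrace_pid 82591 above).
  "$spdk/scripts/bpftrace.sh" "$bdevperf_pid" "$spdk/scripts/bpf/nvmf_timeout.bt" &

  # Attach the controller with a 5 s controller-loss timeout and 2 s between reconnect attempts.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Start the 10-second workload; removing the listener mid-run triggers the abort flood below.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
  sleep 1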
00:20:02.114 16:34:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.375 [2024-07-15 16:34:47.698996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.375 [2024-07-15 16:34:47.699055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.375 [2024-07-15 16:34:47.699067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.375 [2024-07-15 16:34:47.699077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.375 [2024-07-15 16:34:47.699086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.375 [2024-07-15 16:34:47.699095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.375 [2024-07-15 16:34:47.699104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.376 [2024-07-15 16:34:47.699224] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same
with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190b80 is same with the state(5) to be set 00:20:02.377 [2024-07-15 16:34:47.700228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 
16:34:47.700312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700734] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.377 [2024-07-15 16:34:47.700892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.377 [2024-07-15 16:34:47.700903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.700912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.700924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.700933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.700944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.700953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.700965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.700984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.700996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 
16:34:47.701421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.378 [2024-07-15 16:34:47.701840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.378 [2024-07-15 16:34:47.701852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.701870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.701883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.701892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.701903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.701912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.701924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.701933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.701945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.701954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.701965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.701975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.701986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.701995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 
[2024-07-15 16:34:47.702307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.379 [2024-07-15 16:34:47.702757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.379 [2024-07-15 16:34:47.702766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.702988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.702999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.703009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.703020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.380 [2024-07-15 16:34:47.703029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.703040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea1310 is same with the state(5) to be set 00:20:02.380 [2024-07-15 16:34:47.703052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.380 [2024-07-15 16:34:47.703060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.380 [2024-07-15 16:34:47.703073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18928 len:8 PRP1 0x0 PRP2 0x0 00:20:02.380 [2024-07-15 16:34:47.703083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.380 [2024-07-15 16:34:47.703136] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ea1310 was disconnected and freed. reset controller. 
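Every READ still queued on the deleted submission queue is completed manually above with ABORTED - SQ DELETION status before qpair 0x1ea1310 is disconnected and freed and the controller reset starts. A quick way to gauge the size of that abort storm in a saved copy of this console output is a plain grep; a minimal sketch, with timeout.log as a placeholder path for the captured log:
# Count the aborted completions and the TCP recv-state warnings in a saved copy of the log.
grep -c 'ABORTED - SQ DELETION' timeout.log
grep -c 'nvmf_tcp_qpair_set_recv_state' timeout.log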
00:20:02.380 [2024-07-15 16:34:47.703413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:02.380 [2024-07-15 16:34:47.703490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32c00 (9): Bad file descriptor 00:20:02.380 [2024-07-15 16:34:47.703602] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:02.380 [2024-07-15 16:34:47.703623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32c00 with addr=10.0.0.2, port=4420 00:20:02.380 [2024-07-15 16:34:47.703634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32c00 is same with the state(5) to be set 00:20:02.380 [2024-07-15 16:34:47.703652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32c00 (9): Bad file descriptor 00:20:02.380 [2024-07-15 16:34:47.703668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:02.380 [2024-07-15 16:34:47.703678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:02.380 [2024-07-15 16:34:47.703688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:02.380 [2024-07-15 16:34:47.703708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:02.380 [2024-07-15 16:34:47.703725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:02.380 16:34:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82633 00:20:04.283 [2024-07-15 16:34:49.704001] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:04.283 [2024-07-15 16:34:49.704081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32c00 with addr=10.0.0.2, port=4420 00:20:04.283 [2024-07-15 16:34:49.704098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32c00 is same with the state(5) to be set 00:20:04.283 [2024-07-15 16:34:49.704125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32c00 (9): Bad file descriptor 00:20:04.283 [2024-07-15 16:34:49.704156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.283 [2024-07-15 16:34:49.704167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:04.283 [2024-07-15 16:34:49.704178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:04.283 [2024-07-15 16:34:49.704206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
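The reconnect attempts above fail with connect() errno 111 (connection refused) roughly two seconds apart: 16:34:47.7036 and 16:34:49.7040 above, 16:34:51.7044 below. That cadence can be double-checked from a saved copy of this output; a rough sketch, assuming one log entry per line as the console printed it, with timeout.log again as a placeholder path:
# Print the gap, in seconds, between successive failed connect() attempts (errno 111).
grep 'errno = 111' timeout.log \
  | awk -F'[][ ]+' '{ split($3, t, ":"); s = t[1]*3600 + t[2]*60 + t[3]; if (p) printf "%.3f s\n", s - p; p = s }'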
00:20:04.283 [2024-07-15 16:34:49.704218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:06.186 [2024-07-15 16:34:51.704418] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.186 [2024-07-15 16:34:51.704482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32c00 with addr=10.0.0.2, port=4420 00:20:06.186 [2024-07-15 16:34:51.704498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32c00 is same with the state(5) to be set 00:20:06.186 [2024-07-15 16:34:51.704524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32c00 (9): Bad file descriptor 00:20:06.186 [2024-07-15 16:34:51.704544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:06.186 [2024-07-15 16:34:51.704553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:06.186 [2024-07-15 16:34:51.704564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:06.186 [2024-07-15 16:34:51.704591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.186 [2024-07-15 16:34:51.704603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:08.719 [2024-07-15 16:34:53.704763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:08.719 [2024-07-15 16:34:53.704833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:08.719 [2024-07-15 16:34:53.704846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:08.719 [2024-07-15 16:34:53.704870] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:08.719 [2024-07-15 16:34:53.704900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.287 00:20:09.287 Latency(us) 00:20:09.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.287 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:09.287 NVMe0n1 : 8.14 2122.35 8.29 15.73 0.00 59767.66 8162.21 7015926.69 00:20:09.287 =================================================================================================================== 00:20:09.287 Total : 2122.35 8.29 15.73 0.00 59767.66 8162.21 7015926.69 00:20:09.287 0 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:09.287 Attaching 5 probes... 
00:20:09.287 1343.493917: reset bdev controller NVMe0 00:20:09.287 1343.618902: reconnect bdev controller NVMe0 00:20:09.287 3343.928410: reconnect delay bdev controller NVMe0 00:20:09.287 3343.957059: reconnect bdev controller NVMe0 00:20:09.287 5344.371254: reconnect delay bdev controller NVMe0 00:20:09.287 5344.391875: reconnect bdev controller NVMe0 00:20:09.287 7344.821141: reconnect delay bdev controller NVMe0 00:20:09.287 7344.843030: reconnect bdev controller NVMe0 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82591 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82575 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82575 ']' 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82575 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82575 00:20:09.287 killing process with pid 82575 00:20:09.287 Received shutdown signal, test time was about 8.195476 seconds 00:20:09.287 00:20:09.287 Latency(us) 00:20:09.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.287 =================================================================================================================== 00:20:09.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82575' 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82575 00:20:09.287 16:34:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82575 00:20:09.545 16:34:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.804 rmmod nvme_tcp 00:20:09.804 rmmod nvme_fabrics 00:20:09.804 rmmod nvme_keyring 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82143 ']' 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82143 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82143 ']' 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82143 00:20:09.804 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:10.062 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.062 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82143 00:20:10.062 killing process with pid 82143 00:20:10.062 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:10.062 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:10.062 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82143' 00:20:10.062 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82143 00:20:10.062 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82143 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:10.320 ************************************ 00:20:10.320 END TEST nvmf_timeout 00:20:10.320 ************************************ 00:20:10.320 00:20:10.320 real 0m46.903s 00:20:10.320 user 2m17.669s 00:20:10.320 sys 0m5.644s 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.320 16:34:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:10.320 16:34:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:10.320 16:34:55 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:20:10.320 16:34:55 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:20:10.320 16:34:55 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.320 16:34:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.320 16:34:55 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:20:10.320 00:20:10.320 real 12m23.401s 00:20:10.320 user 30m10.436s 00:20:10.320 sys 3m2.885s 00:20:10.320 ************************************ 00:20:10.320 END TEST nvmf_tcp 00:20:10.320 ************************************ 00:20:10.320 16:34:55 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.320 16:34:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.320 16:34:55 -- common/autotest_common.sh@1142 -- 
# return 0 00:20:10.320 16:34:55 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:10.320 16:34:55 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:10.320 16:34:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:10.320 16:34:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.320 16:34:55 -- common/autotest_common.sh@10 -- # set +x 00:20:10.320 ************************************ 00:20:10.320 START TEST nvmf_dif 00:20:10.320 ************************************ 00:20:10.320 16:34:55 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:10.320 * Looking for test storage... 00:20:10.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:10.621 16:34:55 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.621 16:34:55 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:10.621 16:34:55 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.621 16:34:55 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.621 16:34:55 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.621 16:34:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.622 16:34:55 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.622 16:34:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.622 16:34:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:10.622 16:34:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.622 16:34:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:10.622 16:34:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:10.622 16:34:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:10.622 16:34:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:10.622 16:34:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.622 16:34:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:10.622 16:34:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:10.622 16:34:55 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:10.622 Cannot find device "nvmf_tgt_br" 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:10.622 Cannot find device "nvmf_tgt_br2" 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:10.622 Cannot find device "nvmf_tgt_br" 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:10.622 Cannot find device "nvmf_tgt_br2" 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:10.622 16:34:55 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:10.622 16:34:56 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:10.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:20:10.881 00:20:10.881 --- 10.0.0.2 ping statistics --- 00:20:10.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.881 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:10.881 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:10.881 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:10.881 00:20:10.881 --- 10.0.0.3 ping statistics --- 00:20:10.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.881 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:10.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:10.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:20:10.881 00:20:10.881 --- 10.0.0.1 ping statistics --- 00:20:10.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.881 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:10.881 16:34:56 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:11.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:11.139 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:11.139 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:11.139 16:34:56 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.139 16:34:56 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.139 16:34:56 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.139 16:34:56 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.139 16:34:56 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.139 16:34:56 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.139 16:34:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:11.139 16:34:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:11.139 16:34:56 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.139 16:34:56 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.139 16:34:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:11.139 16:34:56 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83070 00:20:11.140 16:34:56 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:11.140 16:34:56 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83070 00:20:11.140 16:34:56 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83070 ']' 00:20:11.140 16:34:56 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.140 16:34:56 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.140 16:34:56 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.140 16:34:56 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.140 16:34:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:11.398 [2024-07-15 16:34:56.701608] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:20:11.398 [2024-07-15 16:34:56.701701] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.398 [2024-07-15 16:34:56.842928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.657 [2024-07-15 16:34:56.950580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:11.657 [2024-07-15 16:34:56.950638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.657 [2024-07-15 16:34:56.950649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.657 [2024-07-15 16:34:56.950657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.657 [2024-07-15 16:34:56.950665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.657 [2024-07-15 16:34:56.950691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.657 [2024-07-15 16:34:57.005475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:20:12.225 16:34:57 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:12.225 16:34:57 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.225 16:34:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:12.225 16:34:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:12.225 [2024-07-15 16:34:57.675732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.225 16:34:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.225 16:34:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:12.225 ************************************ 00:20:12.225 START TEST fio_dif_1_default 00:20:12.225 ************************************ 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:12.225 bdev_null0 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:12.225 
16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:12.225 [2024-07-15 16:34:57.723884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.225 { 00:20:12.225 "params": { 00:20:12.225 "name": "Nvme$subsystem", 00:20:12.225 "trtype": "$TEST_TRANSPORT", 00:20:12.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.225 "adrfam": "ipv4", 00:20:12.225 "trsvcid": "$NVMF_PORT", 00:20:12.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.225 "hdgst": ${hdgst:-false}, 00:20:12.225 "ddgst": ${ddgst:-false} 00:20:12.225 }, 00:20:12.225 "method": "bdev_nvme_attach_controller" 00:20:12.225 } 00:20:12.225 EOF 00:20:12.225 )") 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:12.225 16:34:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:12.225 "params": { 00:20:12.225 "name": "Nvme0", 00:20:12.226 "trtype": "tcp", 00:20:12.226 "traddr": "10.0.0.2", 00:20:12.226 "adrfam": "ipv4", 00:20:12.226 "trsvcid": "4420", 00:20:12.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:12.226 "hdgst": false, 00:20:12.226 "ddgst": false 00:20:12.226 }, 00:20:12.226 "method": "bdev_nvme_attach_controller" 00:20:12.226 }' 00:20:12.226 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:12.226 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:12.226 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.226 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.226 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:12.226 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:12.484 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:12.484 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:12.484 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:12.484 16:34:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:12.484 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:12.484 fio-3.35 00:20:12.484 Starting 1 thread 00:20:24.700 00:20:24.700 filename0: (groupid=0, jobs=1): err= 0: pid=83131: Mon Jul 15 16:35:08 2024 00:20:24.700 read: IOPS=8539, BW=33.4MiB/s (35.0MB/s)(334MiB/10001msec) 00:20:24.700 slat (nsec): min=6639, max=73452, avg=8752.05, stdev=2631.49 00:20:24.700 clat (usec): min=356, max=3481, avg=442.69, stdev=40.67 00:20:24.700 lat (usec): min=363, max=3516, avg=451.44, stdev=41.13 00:20:24.700 clat percentiles (usec): 00:20:24.700 | 1.00th=[ 392], 5.00th=[ 
404], 10.00th=[ 412], 20.00th=[ 420], 00:20:24.700 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 441], 60.00th=[ 445], 00:20:24.700 | 70.00th=[ 453], 80.00th=[ 461], 90.00th=[ 474], 95.00th=[ 490], 00:20:24.700 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 906], 99.95th=[ 979], 00:20:24.700 | 99.99th=[ 1319] 00:20:24.700 bw ( KiB/s): min=31168, max=34720, per=100.00%, avg=34177.68, stdev=847.08, samples=19 00:20:24.700 iops : min= 7792, max= 8680, avg=8544.42, stdev=211.77, samples=19 00:20:24.700 lat (usec) : 500=96.55%, 750=3.33%, 1000=0.09% 00:20:24.700 lat (msec) : 2=0.02%, 4=0.01% 00:20:24.700 cpu : usr=85.27%, sys=12.88%, ctx=27, majf=0, minf=0 00:20:24.700 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.700 issued rwts: total=85400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.700 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:24.700 00:20:24.700 Run status group 0 (all jobs): 00:20:24.700 READ: bw=33.4MiB/s (35.0MB/s), 33.4MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=334MiB (350MB), run=10001-10001msec 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.700 00:20:24.700 real 0m10.994s 00:20:24.700 user 0m9.160s 00:20:24.700 sys 0m1.550s 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 ************************************ 00:20:24.700 END TEST fio_dif_1_default 00:20:24.700 ************************************ 00:20:24.700 16:35:08 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:24.700 16:35:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:24.700 16:35:08 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:24.700 16:35:08 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.700 16:35:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 ************************************ 00:20:24.700 START TEST fio_dif_1_multi_subsystems 00:20:24.700 ************************************ 
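Editor's note: the single-subsystem setup that fio_dif_1_default just exercised can be reproduced outside the harness with a handful of SPDK RPCs followed by a fio run through the bdev plugin. The sketch below is illustrative only: it assumes a running nvmf_tgt (in the test it runs inside the nvmf_tgt_ns_spdk namespace), uses scripts/rpc.py instead of the suite's rpc_cmd wrapper, and the nvme0.json / dif.fio file names stand in for the /dev/fd pipes the test feeds to fio.

    # Minimal sketch, assuming nvmf_tgt is already running (paths illustrative).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Null bdev with 16-byte metadata and DIF type 1, matching bdev_null_create above.
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

    # TCP transport with DIF insert/strip, plus the subsystem, namespace and
    # listener that create_subsystem 0 set up in the trace above.
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Drive it with fio's SPDK bdev engine; nvme0.json carries the
    # bdev_nvme_attach_controller parameters printed by gen_nvmf_target_json.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./nvme0.json ./dif.fio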
00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 bdev_null0 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 [2024-07-15 16:35:08.763681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:24.700 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.701 bdev_null1 00:20:24.701 16:35:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.701 { 00:20:24.701 "params": { 00:20:24.701 "name": "Nvme$subsystem", 00:20:24.701 "trtype": "$TEST_TRANSPORT", 00:20:24.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.701 "adrfam": "ipv4", 00:20:24.701 "trsvcid": "$NVMF_PORT", 00:20:24.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.701 "hdgst": ${hdgst:-false}, 00:20:24.701 "ddgst": ${ddgst:-false} 00:20:24.701 }, 00:20:24.701 "method": "bdev_nvme_attach_controller" 00:20:24.701 } 00:20:24.701 EOF 00:20:24.701 )") 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:24.701 
16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.701 { 00:20:24.701 "params": { 00:20:24.701 "name": "Nvme$subsystem", 00:20:24.701 "trtype": "$TEST_TRANSPORT", 00:20:24.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.701 "adrfam": "ipv4", 00:20:24.701 "trsvcid": "$NVMF_PORT", 00:20:24.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.701 "hdgst": ${hdgst:-false}, 00:20:24.701 "ddgst": ${ddgst:-false} 00:20:24.701 }, 00:20:24.701 "method": "bdev_nvme_attach_controller" 00:20:24.701 } 00:20:24.701 EOF 00:20:24.701 )") 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
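Editor's note: the two config+=() here-documents being assembled here expand to the Nvme0 and Nvme1 attach blocks that gen_nvmf_target_json prints just below. Written out as a standalone file, the config handed to fio would look roughly like the sketch that follows; the outer "subsystems"/"bdev" wrapper is assumed from the standard SPDK JSON config layout, and the nvme_attach.json name is made up here (the test pipes the same content through /dev/fd/62 instead of a file).

    cat > nvme_attach.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode0",
                          "hostnqn": "nqn.2016-06.io.spdk:host0",
                          "hdgst": false, "ddgst": false }
            },
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1",
                          "hdgst": false, "ddgst": false }
            }
          ]
        }
      ]
    }
    JSON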
00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:24.701 "params": { 00:20:24.701 "name": "Nvme0", 00:20:24.701 "trtype": "tcp", 00:20:24.701 "traddr": "10.0.0.2", 00:20:24.701 "adrfam": "ipv4", 00:20:24.701 "trsvcid": "4420", 00:20:24.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.701 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:24.701 "hdgst": false, 00:20:24.701 "ddgst": false 00:20:24.701 }, 00:20:24.701 "method": "bdev_nvme_attach_controller" 00:20:24.701 },{ 00:20:24.701 "params": { 00:20:24.701 "name": "Nvme1", 00:20:24.701 "trtype": "tcp", 00:20:24.701 "traddr": "10.0.0.2", 00:20:24.701 "adrfam": "ipv4", 00:20:24.701 "trsvcid": "4420", 00:20:24.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.701 "hdgst": false, 00:20:24.701 "ddgst": false 00:20:24.701 }, 00:20:24.701 "method": "bdev_nvme_attach_controller" 00:20:24.701 }' 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:24.701 16:35:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.702 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:24.702 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:24.702 fio-3.35 00:20:24.702 Starting 2 threads 00:20:34.672 00:20:34.672 filename0: (groupid=0, jobs=1): err= 0: pid=83290: Mon Jul 15 16:35:19 2024 00:20:34.672 read: IOPS=4783, BW=18.7MiB/s (19.6MB/s)(187MiB/10001msec) 00:20:34.672 slat (usec): min=7, max=584, avg=13.66, stdev= 4.46 00:20:34.672 clat (usec): min=413, max=4015, avg=798.42, stdev=68.97 00:20:34.672 lat (usec): min=421, max=4045, avg=812.08, stdev=69.35 00:20:34.672 clat percentiles (usec): 00:20:34.672 | 1.00th=[ 734], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 766], 00:20:34.672 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 799], 00:20:34.672 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 824], 95.00th=[ 857], 00:20:34.672 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1614], 99.95th=[ 1827], 00:20:34.672 | 99.99th=[ 1991] 00:20:34.672 bw ( KiB/s): min=18020, max=19584, per=50.01%, avg=19132.84, stdev=541.75, samples=19 00:20:34.672 iops : min= 4505, max= 4896, 
avg=4783.21, stdev=135.44, samples=19 00:20:34.672 lat (usec) : 500=0.06%, 750=5.28%, 1000=92.44% 00:20:34.672 lat (msec) : 2=2.21%, 4=0.01%, 10=0.01% 00:20:34.672 cpu : usr=90.45%, sys=8.18%, ctx=32, majf=0, minf=0 00:20:34.672 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.672 issued rwts: total=47836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.672 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:34.672 filename1: (groupid=0, jobs=1): err= 0: pid=83291: Mon Jul 15 16:35:19 2024 00:20:34.672 read: IOPS=4781, BW=18.7MiB/s (19.6MB/s)(187MiB/10001msec) 00:20:34.672 slat (usec): min=5, max=480, avg=13.47, stdev= 4.71 00:20:34.672 clat (usec): min=441, max=4160, avg=800.29, stdev=76.60 00:20:34.672 lat (usec): min=451, max=4186, avg=813.77, stdev=77.26 00:20:34.672 clat percentiles (usec): 00:20:34.672 | 1.00th=[ 693], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 766], 00:20:34.672 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:20:34.672 | 70.00th=[ 816], 80.00th=[ 824], 90.00th=[ 840], 95.00th=[ 873], 00:20:34.672 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1729], 99.95th=[ 1844], 00:20:34.672 | 99.99th=[ 2040] 00:20:34.672 bw ( KiB/s): min=18016, max=19584, per=49.98%, avg=19122.53, stdev=536.24, samples=19 00:20:34.672 iops : min= 4504, max= 4896, avg=4780.63, stdev=134.06, samples=19 00:20:34.672 lat (usec) : 500=0.02%, 750=14.15%, 1000=83.41% 00:20:34.672 lat (msec) : 2=2.40%, 4=0.01%, 10=0.01% 00:20:34.672 cpu : usr=89.68%, sys=8.74%, ctx=100, majf=0, minf=9 00:20:34.672 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.673 issued rwts: total=47816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.673 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:34.673 00:20:34.673 Run status group 0 (all jobs): 00:20:34.673 READ: bw=37.4MiB/s (39.2MB/s), 18.7MiB/s-18.7MiB/s (19.6MB/s-19.6MB/s), io=374MiB (392MB), run=10001-10001msec 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.673 16:35:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.673 00:20:34.673 real 0m11.131s 00:20:34.673 user 0m18.751s 00:20:34.673 sys 0m1.969s 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.673 16:35:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 ************************************ 00:20:34.673 END TEST fio_dif_1_multi_subsystems 00:20:34.673 ************************************ 00:20:34.673 16:35:19 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:34.673 16:35:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:34.673 16:35:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:34.673 16:35:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.673 16:35:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 ************************************ 00:20:34.673 START TEST fio_dif_rand_params 00:20:34.673 ************************************ 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
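Editor's note: the fio_dif_rand_params case starting here switches to a DIF type 3 null bdev and 128k random reads with three jobs at queue depth 3 for 5 seconds. The job file itself is generated on the fly by gen_fio_conf and never written to disk; purely for illustration, an equivalent standalone job might look like the sketch below. The rand_params.fio name, the [global] split, and the Nvme0n1 filename are assumptions, and thread/direct/time_based are typical spdk_bdev settings rather than values taken from this log.

    cat > rand_params.fio <<'FIO'
    [global]
    ioengine=spdk_bdev      ; served by the preloaded build/fio/spdk_bdev plugin
    thread=1                ; assumption: spdk_bdev jobs run threaded
    direct=1                ; assumption
    time_based=1            ; assumption
    runtime=5               ; runtime=5 from the trace above
    rw=randread             ; matches the rw=randread banner fio prints below
    bs=128k                 ; bs=128k from the trace above
    iodepth=3               ; iodepth=3 from the trace above
    numjobs=3               ; numjobs=3 -> "Starting 3 threads" below

    [filename0]
    filename=Nvme0n1        ; assumption: namespace exposed by the Nvme0 attach config
    FIO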
00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 bdev_null0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.673 [2024-07-15 16:35:19.949665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.673 { 00:20:34.673 "params": { 00:20:34.673 "name": "Nvme$subsystem", 00:20:34.673 "trtype": "$TEST_TRANSPORT", 00:20:34.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.673 "adrfam": "ipv4", 00:20:34.673 "trsvcid": "$NVMF_PORT", 00:20:34.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.673 "hdgst": ${hdgst:-false}, 00:20:34.673 "ddgst": ${ddgst:-false} 00:20:34.673 }, 00:20:34.673 "method": "bdev_nvme_attach_controller" 00:20:34.673 } 00:20:34.673 EOF 00:20:34.673 )") 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
gen_fio_conf 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
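Editor's note: the ldd | grep libasan | awk '{print $3}' probe above (together with the matching libclang_rt.asan probe that follows) is how the fio_plugin helper decides whether a sanitizer runtime must be preloaded ahead of the bdev plugin before launching fio. Condensed into a standalone sketch, with the two probes folded into one awk expression and the illustrative nvme_attach.json / rand_params.fio names from the earlier notes in place of the /dev/fd/62 and /dev/fd/61 pipes the test actually passes:

    # If the plugin links against ASAN, its runtime must come first in LD_PRELOAD;
    # in this run asan_lib resolves to empty, so only the plugin itself is preloaded.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | awk '/libasan|libclang_rt\.asan/ {print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./nvme_attach.json ./rand_params.fio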
00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:34.673 "params": { 00:20:34.673 "name": "Nvme0", 00:20:34.673 "trtype": "tcp", 00:20:34.673 "traddr": "10.0.0.2", 00:20:34.673 "adrfam": "ipv4", 00:20:34.673 "trsvcid": "4420", 00:20:34.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:34.673 "hdgst": false, 00:20:34.673 "ddgst": false 00:20:34.673 }, 00:20:34.673 "method": "bdev_nvme_attach_controller" 00:20:34.673 }' 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:34.673 16:35:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:34.673 16:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:34.673 16:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:34.673 16:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:34.673 16:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.673 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:34.673 ... 
00:20:34.673 fio-3.35 00:20:34.673 Starting 3 threads 00:20:41.276 00:20:41.276 filename0: (groupid=0, jobs=1): err= 0: pid=83448: Mon Jul 15 16:35:25 2024 00:20:41.276 read: IOPS=260, BW=32.6MiB/s (34.1MB/s)(163MiB/5010msec) 00:20:41.276 slat (nsec): min=5494, max=57283, avg=15416.14, stdev=5042.34 00:20:41.276 clat (usec): min=11329, max=14108, avg=11481.26, stdev=147.75 00:20:41.276 lat (usec): min=11343, max=14138, avg=11496.67, stdev=148.36 00:20:41.276 clat percentiles (usec): 00:20:41.276 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:41.276 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:41.276 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11600], 00:20:41.276 | 99.00th=[11863], 99.50th=[11863], 99.90th=[14091], 99.95th=[14091], 00:20:41.276 | 99.99th=[14091] 00:20:41.276 bw ( KiB/s): min=33024, max=33792, per=33.29%, avg=33297.50, stdev=360.65, samples=10 00:20:41.276 iops : min= 258, max= 264, avg=260.00, stdev= 2.71, samples=10 00:20:41.276 lat (msec) : 20=100.00% 00:20:41.276 cpu : usr=90.80%, sys=8.66%, ctx=6, majf=0, minf=9 00:20:41.276 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.276 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.276 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:41.276 filename0: (groupid=0, jobs=1): err= 0: pid=83449: Mon Jul 15 16:35:25 2024 00:20:41.276 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5006msec) 00:20:41.276 slat (nsec): min=7810, max=41898, avg=15240.32, stdev=4814.36 00:20:41.276 clat (usec): min=8981, max=14385, avg=11473.17, stdev=198.96 00:20:41.276 lat (usec): min=8989, max=14410, avg=11488.41, stdev=199.73 00:20:41.276 clat percentiles (usec): 00:20:41.276 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11469], 20.00th=[11469], 00:20:41.276 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:41.276 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11600], 00:20:41.276 | 99.00th=[11863], 99.50th=[11994], 99.90th=[14353], 99.95th=[14353], 00:20:41.276 | 99.99th=[14353] 00:20:41.276 bw ( KiB/s): min=33024, max=33792, per=33.30%, avg=33310.80, stdev=370.78, samples=10 00:20:41.276 iops : min= 258, max= 264, avg=260.10, stdev= 2.73, samples=10 00:20:41.276 lat (msec) : 10=0.23%, 20=99.77% 00:20:41.276 cpu : usr=91.57%, sys=7.93%, ctx=9, majf=0, minf=9 00:20:41.276 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.276 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.276 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:41.276 filename0: (groupid=0, jobs=1): err= 0: pid=83450: Mon Jul 15 16:35:25 2024 00:20:41.276 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5008msec) 00:20:41.276 slat (nsec): min=7773, max=57303, avg=16024.32, stdev=4630.96 00:20:41.276 clat (usec): min=11327, max=12311, avg=11475.60, stdev=84.83 00:20:41.276 lat (usec): min=11340, max=12332, avg=11491.62, stdev=85.64 00:20:41.276 clat percentiles (usec): 00:20:41.276 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:20:41.276 | 30.00th=[11469], 40.00th=[11469], 
50.00th=[11469], 60.00th=[11469], 00:20:41.276 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11600], 00:20:41.276 | 99.00th=[11863], 99.50th=[11863], 99.90th=[12256], 99.95th=[12256], 00:20:41.276 | 99.99th=[12256] 00:20:41.276 bw ( KiB/s): min=33024, max=33792, per=33.30%, avg=33304.10, stdev=355.66, samples=10 00:20:41.276 iops : min= 258, max= 264, avg=260.00, stdev= 2.71, samples=10 00:20:41.276 lat (msec) : 20=100.00% 00:20:41.276 cpu : usr=91.71%, sys=7.77%, ctx=7, majf=0, minf=9 00:20:41.276 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.276 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.276 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:41.276 00:20:41.276 Run status group 0 (all jobs): 00:20:41.276 READ: bw=97.7MiB/s (102MB/s), 32.6MiB/s-32.6MiB/s (34.1MB/s-34.2MB/s), io=489MiB (513MB), run=5006-5010msec 00:20:41.276 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:41.276 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:41.276 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.276 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:41.277 16:35:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 bdev_null0 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 [2024-07-15 16:35:25.965896] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 bdev_null1 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 bdev_null2 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:20:41.277 { 00:20:41.277 "params": { 00:20:41.277 "name": "Nvme$subsystem", 00:20:41.277 "trtype": "$TEST_TRANSPORT", 00:20:41.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.277 "adrfam": "ipv4", 00:20:41.277 "trsvcid": "$NVMF_PORT", 00:20:41.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.277 "hdgst": ${hdgst:-false}, 00:20:41.277 "ddgst": ${ddgst:-false} 00:20:41.277 }, 00:20:41.277 "method": "bdev_nvme_attach_controller" 00:20:41.277 } 00:20:41.277 EOF 00:20:41.277 )") 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.277 { 00:20:41.277 "params": { 00:20:41.277 "name": "Nvme$subsystem", 00:20:41.277 "trtype": "$TEST_TRANSPORT", 00:20:41.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.277 "adrfam": "ipv4", 00:20:41.277 "trsvcid": "$NVMF_PORT", 00:20:41.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.277 "hdgst": ${hdgst:-false}, 00:20:41.277 "ddgst": ${ddgst:-false} 00:20:41.277 }, 00:20:41.277 "method": "bdev_nvme_attach_controller" 00:20:41.277 } 00:20:41.277 EOF 00:20:41.277 )") 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:41.277 16:35:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.277 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.278 { 00:20:41.278 "params": { 00:20:41.278 "name": "Nvme$subsystem", 00:20:41.278 "trtype": "$TEST_TRANSPORT", 00:20:41.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.278 "adrfam": "ipv4", 00:20:41.278 "trsvcid": "$NVMF_PORT", 00:20:41.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.278 "hdgst": ${hdgst:-false}, 00:20:41.278 "ddgst": ${ddgst:-false} 00:20:41.278 }, 00:20:41.278 "method": "bdev_nvme_attach_controller" 00:20:41.278 } 00:20:41.278 EOF 00:20:41.278 )") 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:41.278 "params": { 00:20:41.278 "name": "Nvme0", 00:20:41.278 "trtype": "tcp", 00:20:41.278 "traddr": "10.0.0.2", 00:20:41.278 "adrfam": "ipv4", 00:20:41.278 "trsvcid": "4420", 00:20:41.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:41.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:41.278 "hdgst": false, 00:20:41.278 "ddgst": false 00:20:41.278 }, 00:20:41.278 "method": "bdev_nvme_attach_controller" 00:20:41.278 },{ 00:20:41.278 "params": { 00:20:41.278 "name": "Nvme1", 00:20:41.278 "trtype": "tcp", 00:20:41.278 "traddr": "10.0.0.2", 00:20:41.278 "adrfam": "ipv4", 00:20:41.278 "trsvcid": "4420", 00:20:41.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.278 "hdgst": false, 00:20:41.278 "ddgst": false 00:20:41.278 }, 00:20:41.278 "method": "bdev_nvme_attach_controller" 00:20:41.278 },{ 00:20:41.278 "params": { 00:20:41.278 "name": "Nvme2", 00:20:41.278 "trtype": "tcp", 00:20:41.278 "traddr": "10.0.0.2", 00:20:41.278 "adrfam": "ipv4", 00:20:41.278 "trsvcid": "4420", 00:20:41.278 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:41.278 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:41.278 "hdgst": false, 00:20:41.278 "ddgst": false 00:20:41.278 }, 00:20:41.278 "method": "bdev_nvme_attach_controller" 00:20:41.278 }' 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:41.278 16:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.278 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:41.278 ... 00:20:41.278 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:41.278 ... 00:20:41.278 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:41.278 ... 00:20:41.278 fio-3.35 00:20:41.278 Starting 24 threads 00:20:56.263 00:20:56.263 filename0: (groupid=0, jobs=1): err= 0: pid=83545: Mon Jul 15 16:35:40 2024 00:20:56.263 read: IOPS=240, BW=962KiB/s (985kB/s)(9652KiB/10031msec) 00:20:56.263 slat (usec): min=4, max=9023, avg=21.88, stdev=216.71 00:20:56.263 clat (msec): min=25, max=127, avg=66.40, stdev=20.96 00:20:56.263 lat (msec): min=25, max=127, avg=66.42, stdev=20.96 00:20:56.263 clat percentiles (msec): 00:20:56.263 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:20:56.263 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 71], 00:20:56.263 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 110], 00:20:56.263 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 128], 00:20:56.263 | 99.99th=[ 128] 00:20:56.263 bw ( KiB/s): min= 664, max= 1238, per=2.58%, avg=960.42, stdev=170.82, samples=19 00:20:56.263 iops : min= 166, max= 309, avg=240.05, stdev=42.65, samples=19 00:20:56.263 lat (msec) : 50=25.82%, 100=65.19%, 250=8.99% 00:20:56.263 cpu : usr=41.67%, sys=2.72%, ctx=1331, majf=0, minf=9 00:20:56.263 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 issued rwts: total=2413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.263 filename0: (groupid=0, jobs=1): err= 0: pid=83546: Mon Jul 15 16:35:40 2024 00:20:56.263 read: IOPS=240, BW=964KiB/s (987kB/s)(9672KiB/10034msec) 00:20:56.263 slat (usec): min=5, max=8038, avg=28.07, stdev=300.71 00:20:56.263 clat (msec): min=22, max=130, avg=66.17, stdev=20.54 00:20:56.263 lat (msec): min=22, max=130, avg=66.20, stdev=20.54 00:20:56.263 clat percentiles (msec): 00:20:56.263 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:20:56.263 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 71], 00:20:56.263 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 109], 00:20:56.263 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 131], 99.95th=[ 131], 00:20:56.263 | 99.99th=[ 131] 00:20:56.263 bw ( KiB/s): min= 664, max= 1208, per=2.59%, avg=963.00, stdev=157.96, samples=20 00:20:56.263 iops : min= 166, max= 302, avg=240.70, stdev=39.44, samples=20 00:20:56.263 lat (msec) : 50=26.34%, 100=65.26%, 250=8.40% 00:20:56.263 cpu : usr=40.58%, sys=2.54%, ctx=1274, majf=0, minf=9 00:20:56.263 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.263 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:20:56.263 filename0: (groupid=0, jobs=1): err= 0: pid=83547: Mon Jul 15 16:35:40 2024 00:20:56.263 read: IOPS=211, BW=848KiB/s (868kB/s)(8512KiB/10043msec) 00:20:56.263 slat (usec): min=6, max=4030, avg=16.90, stdev=123.15 00:20:56.263 clat (msec): min=2, max=163, avg=75.31, stdev=27.23 00:20:56.263 lat (msec): min=2, max=163, avg=75.33, stdev=27.23 00:20:56.263 clat percentiles (msec): 00:20:56.263 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 48], 20.00th=[ 59], 00:20:56.263 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:20:56.263 | 70.00th=[ 86], 80.00th=[ 104], 90.00th=[ 113], 95.00th=[ 118], 00:20:56.263 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 163], 00:20:56.263 | 99.99th=[ 163] 00:20:56.263 bw ( KiB/s): min= 512, max= 1920, per=2.27%, avg=844.40, stdev=297.03, samples=20 00:20:56.263 iops : min= 128, max= 480, avg=211.10, stdev=74.26, samples=20 00:20:56.263 lat (msec) : 4=2.26%, 10=1.50%, 20=1.41%, 50=7.14%, 100=67.01% 00:20:56.263 lat (msec) : 250=20.68% 00:20:56.263 cpu : usr=43.88%, sys=3.10%, ctx=1538, majf=0, minf=0 00:20:56.263 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:20:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 issued rwts: total=2128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.263 filename0: (groupid=0, jobs=1): err= 0: pid=83548: Mon Jul 15 16:35:40 2024 00:20:56.263 read: IOPS=964, BW=3857KiB/s (3950kB/s)(37.7MiB/10002msec) 00:20:56.263 slat (usec): min=3, max=8035, avg=21.01, stdev=86.04 00:20:56.263 clat (msec): min=2, max=151, avg=16.50, stdev=25.63 00:20:56.263 lat (msec): min=2, max=151, avg=16.52, stdev=25.63 00:20:56.263 clat percentiles (msec): 00:20:56.263 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:20:56.263 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:20:56.263 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 63], 95.00th=[ 79], 00:20:56.263 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 144], 99.95th=[ 146], 00:20:56.263 | 99.99th=[ 153] 00:20:56.263 bw ( KiB/s): min= 640, max= 9728, per=9.89%, avg=3679.74, stdev=4040.64, samples=19 00:20:56.263 iops : min= 160, max= 2432, avg=919.89, stdev=1010.17, samples=19 00:20:56.263 lat (msec) : 4=2.86%, 10=82.15%, 20=1.11%, 50=1.99%, 100=9.05% 00:20:56.263 lat (msec) : 250=2.84% 00:20:56.263 cpu : usr=60.23%, sys=3.69%, ctx=1026, majf=0, minf=9 00:20:56.263 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 issued rwts: total=9645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.263 filename0: (groupid=0, jobs=1): err= 0: pid=83549: Mon Jul 15 16:35:40 2024 00:20:56.263 read: IOPS=241, BW=964KiB/s (987kB/s)(9672KiB/10031msec) 00:20:56.263 slat (usec): min=6, max=8043, avg=38.22, stdev=430.60 00:20:56.263 clat (msec): min=22, max=131, avg=66.17, stdev=20.60 00:20:56.263 lat (msec): min=22, max=131, avg=66.21, stdev=20.60 00:20:56.263 clat percentiles (msec): 00:20:56.263 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:20:56.263 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 71], 00:20:56.263 | 
70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 109], 00:20:56.263 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:20:56.263 | 99.99th=[ 132] 00:20:56.263 bw ( KiB/s): min= 664, max= 1248, per=2.59%, avg=962.70, stdev=167.06, samples=20 00:20:56.263 iops : min= 166, max= 312, avg=240.60, stdev=41.65, samples=20 00:20:56.263 lat (msec) : 50=29.03%, 100=62.32%, 250=8.64% 00:20:56.263 cpu : usr=31.57%, sys=1.99%, ctx=1002, majf=0, minf=9 00:20:56.263 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.263 filename0: (groupid=0, jobs=1): err= 0: pid=83550: Mon Jul 15 16:35:40 2024 00:20:56.263 read: IOPS=239, BW=957KiB/s (980kB/s)(9600KiB/10033msec) 00:20:56.263 slat (usec): min=3, max=8023, avg=21.05, stdev=200.28 00:20:56.263 clat (msec): min=13, max=128, avg=66.74, stdev=21.66 00:20:56.263 lat (msec): min=13, max=128, avg=66.76, stdev=21.66 00:20:56.263 clat percentiles (msec): 00:20:56.263 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:20:56.263 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72], 00:20:56.263 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 111], 00:20:56.263 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 129], 00:20:56.263 | 99.99th=[ 129] 00:20:56.263 bw ( KiB/s): min= 640, max= 1416, per=2.56%, avg=953.60, stdev=196.79, samples=20 00:20:56.263 iops : min= 160, max= 354, avg=238.40, stdev=49.20, samples=20 00:20:56.263 lat (msec) : 20=0.67%, 50=25.79%, 100=64.25%, 250=9.29% 00:20:56.263 cpu : usr=37.95%, sys=2.39%, ctx=1066, majf=0, minf=9 00:20:56.263 IO depths : 1=0.1%, 2=0.4%, 4=1.1%, 8=81.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.263 issued rwts: total=2400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.263 filename0: (groupid=0, jobs=1): err= 0: pid=83551: Mon Jul 15 16:35:40 2024 00:20:56.263 read: IOPS=286, BW=1147KiB/s (1174kB/s)(11.2MiB/10007msec) 00:20:56.263 slat (usec): min=3, max=4024, avg=15.82, stdev=75.02 00:20:56.263 clat (msec): min=5, max=131, avg=55.73, stdev=29.41 00:20:56.263 lat (msec): min=5, max=131, avg=55.75, stdev=29.41 00:20:56.263 clat percentiles (msec): 00:20:56.263 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 36], 00:20:56.264 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 60], 60.00th=[ 64], 00:20:56.264 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 93], 95.00th=[ 108], 00:20:56.264 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:56.264 | 99.99th=[ 132] 00:20:56.264 bw ( KiB/s): min= 712, max= 1232, per=2.57%, avg=956.21, stdev=169.19, samples=19 00:20:56.264 iops : min= 178, max= 308, avg=239.05, stdev=42.30, samples=19 00:20:56.264 lat (msec) : 10=17.08%, 20=0.98%, 50=24.15%, 100=51.06%, 250=6.73% 00:20:56.264 cpu : usr=33.01%, sys=2.10%, ctx=904, majf=0, minf=9 00:20:56.264 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:56.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 complete : 
0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 issued rwts: total=2869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.264 filename0: (groupid=0, jobs=1): err= 0: pid=83552: Mon Jul 15 16:35:40 2024 00:20:56.264 read: IOPS=234, BW=940KiB/s (962kB/s)(9424KiB/10028msec) 00:20:56.264 slat (usec): min=5, max=3557, avg=17.55, stdev=93.33 00:20:56.264 clat (msec): min=26, max=131, avg=67.93, stdev=20.93 00:20:56.264 lat (msec): min=26, max=131, avg=67.95, stdev=20.93 00:20:56.264 clat percentiles (msec): 00:20:56.264 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 48], 00:20:56.264 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:20:56.264 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 109], 00:20:56.264 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:20:56.264 | 99.99th=[ 132] 00:20:56.264 bw ( KiB/s): min= 664, max= 1192, per=2.52%, avg=936.37, stdev=167.72, samples=19 00:20:56.264 iops : min= 166, max= 298, avg=234.05, stdev=41.91, samples=19 00:20:56.264 lat (msec) : 50=27.25%, 100=62.99%, 250=9.76% 00:20:56.264 cpu : usr=33.19%, sys=2.23%, ctx=1004, majf=0, minf=9 00:20:56.264 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:56.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 complete : 0=0.0%, 4=88.3%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 issued rwts: total=2356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.264 filename1: (groupid=0, jobs=1): err= 0: pid=83553: Mon Jul 15 16:35:40 2024 00:20:56.264 read: IOPS=231, BW=927KiB/s (950kB/s)(9304KiB/10033msec) 00:20:56.264 slat (nsec): min=3805, max=62740, avg=14134.86, stdev=5034.59 00:20:56.264 clat (msec): min=12, max=143, avg=68.87, stdev=21.55 00:20:56.264 lat (msec): min=12, max=143, avg=68.89, stdev=21.55 00:20:56.264 clat percentiles (msec): 00:20:56.264 | 1.00th=[ 23], 5.00th=[ 38], 10.00th=[ 46], 20.00th=[ 49], 00:20:56.264 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:56.264 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 109], 00:20:56.264 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:20:56.264 | 99.99th=[ 144] 00:20:56.264 bw ( KiB/s): min= 640, max= 1264, per=2.49%, avg=926.40, stdev=182.90, samples=20 00:20:56.264 iops : min= 160, max= 316, avg=231.60, stdev=45.73, samples=20 00:20:56.264 lat (msec) : 20=0.60%, 50=21.71%, 100=66.85%, 250=10.83% 00:20:56.264 cpu : usr=34.84%, sys=2.40%, ctx=1153, majf=0, minf=9 00:20:56.264 IO depths : 1=0.1%, 2=1.0%, 4=3.7%, 8=79.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:56.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 issued rwts: total=2326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.264 filename1: (groupid=0, jobs=1): err= 0: pid=83554: Mon Jul 15 16:35:40 2024 00:20:56.264 read: IOPS=241, BW=965KiB/s (988kB/s)(9680KiB/10035msec) 00:20:56.264 slat (usec): min=4, max=8064, avg=21.61, stdev=200.13 00:20:56.264 clat (msec): min=22, max=129, avg=66.18, stdev=20.31 00:20:56.264 lat (msec): min=22, max=129, avg=66.21, stdev=20.31 00:20:56.264 clat percentiles (msec): 00:20:56.264 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 48], 
00:20:56.264 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 70], 00:20:56.264 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 96], 95.00th=[ 110], 00:20:56.264 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 130], 99.95th=[ 130], 00:20:56.264 | 99.99th=[ 130] 00:20:56.264 bw ( KiB/s): min= 664, max= 1280, per=2.60%, avg=966.42, stdev=168.63, samples=19 00:20:56.264 iops : min= 166, max= 320, avg=241.58, stdev=42.15, samples=19 00:20:56.264 lat (msec) : 50=23.51%, 100=67.77%, 250=8.72% 00:20:56.264 cpu : usr=42.46%, sys=2.75%, ctx=1281, majf=0, minf=9 00:20:56.264 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:56.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 issued rwts: total=2420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.264 filename1: (groupid=0, jobs=1): err= 0: pid=83555: Mon Jul 15 16:35:40 2024 00:20:56.264 read: IOPS=241, BW=967KiB/s (990kB/s)(9704KiB/10035msec) 00:20:56.264 slat (usec): min=6, max=8021, avg=25.79, stdev=245.70 00:20:56.264 clat (msec): min=23, max=127, avg=66.00, stdev=20.69 00:20:56.264 lat (msec): min=23, max=127, avg=66.03, stdev=20.69 00:20:56.264 clat percentiles (msec): 00:20:56.264 | 1.00th=[ 31], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 48], 00:20:56.264 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 71], 00:20:56.264 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 96], 95.00th=[ 109], 00:20:56.264 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 128], 00:20:56.264 | 99.99th=[ 128] 00:20:56.264 bw ( KiB/s): min= 664, max= 1224, per=2.60%, avg=966.20, stdev=162.68, samples=20 00:20:56.264 iops : min= 166, max= 306, avg=241.50, stdev=40.61, samples=20 00:20:56.264 lat (msec) : 50=26.46%, 100=64.55%, 250=8.99% 00:20:56.264 cpu : usr=41.56%, sys=2.50%, ctx=1261, majf=0, minf=9 00:20:56.264 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:56.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 issued rwts: total=2426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.264 filename1: (groupid=0, jobs=1): err= 0: pid=83556: Mon Jul 15 16:35:40 2024 00:20:56.264 read: IOPS=964, BW=3858KiB/s (3950kB/s)(37.7MiB/10007msec) 00:20:56.264 slat (usec): min=3, max=8023, avg=18.16, stdev=110.82 00:20:56.264 clat (msec): min=2, max=167, avg=16.51, stdev=24.88 00:20:56.264 lat (msec): min=2, max=167, avg=16.53, stdev=24.88 00:20:56.264 clat percentiles (msec): 00:20:56.264 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:20:56.264 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:20:56.264 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 61], 95.00th=[ 79], 00:20:56.264 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 132], 00:20:56.264 | 99.99th=[ 169] 00:20:56.264 bw ( KiB/s): min= 664, max= 9616, per=9.92%, avg=3688.37, stdev=4001.18, samples=19 00:20:56.264 iops : min= 166, max= 2404, avg=922.05, stdev=1000.29, samples=19 00:20:56.264 lat (msec) : 4=2.66%, 10=81.65%, 20=1.15%, 50=3.29%, 100=8.93% 00:20:56.264 lat (msec) : 250=2.31% 00:20:56.264 cpu : usr=53.80%, sys=3.49%, ctx=733, majf=0, minf=9 00:20:56.264 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=83.2%, 16=16.5%, 32=0.0%, >=64=0.0% 
00:20:56.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 issued rwts: total=9651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.264 filename1: (groupid=0, jobs=1): err= 0: pid=83557: Mon Jul 15 16:35:40 2024 00:20:56.264 read: IOPS=218, BW=875KiB/s (896kB/s)(8780KiB/10039msec) 00:20:56.264 slat (usec): min=3, max=8035, avg=20.99, stdev=241.98 00:20:56.264 clat (msec): min=23, max=146, avg=73.01, stdev=23.10 00:20:56.264 lat (msec): min=23, max=146, avg=73.03, stdev=23.09 00:20:56.264 clat percentiles (msec): 00:20:56.264 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 51], 00:20:56.264 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:20:56.264 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 110], 95.00th=[ 118], 00:20:56.264 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 140], 00:20:56.264 | 99.99th=[ 146] 00:20:56.264 bw ( KiB/s): min= 624, max= 1232, per=2.34%, avg=871.70, stdev=208.97, samples=20 00:20:56.264 iops : min= 156, max= 308, avg=217.90, stdev=52.20, samples=20 00:20:56.264 lat (msec) : 50=19.13%, 100=65.88%, 250=14.99% 00:20:56.264 cpu : usr=34.56%, sys=2.28%, ctx=1007, majf=0, minf=9 00:20:56.264 IO depths : 1=0.1%, 2=3.1%, 4=12.6%, 8=69.7%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:56.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 complete : 0=0.0%, 4=90.9%, 8=6.3%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 issued rwts: total=2195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.264 filename1: (groupid=0, jobs=1): err= 0: pid=83558: Mon Jul 15 16:35:40 2024 00:20:56.264 read: IOPS=223, BW=892KiB/s (914kB/s)(8960KiB/10041msec) 00:20:56.264 slat (usec): min=8, max=8028, avg=21.25, stdev=239.38 00:20:56.264 clat (msec): min=6, max=155, avg=71.52, stdev=22.22 00:20:56.264 lat (msec): min=6, max=155, avg=71.54, stdev=22.22 00:20:56.264 clat percentiles (msec): 00:20:56.264 | 1.00th=[ 12], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 51], 00:20:56.264 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:20:56.264 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 115], 00:20:56.264 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 146], 00:20:56.264 | 99.99th=[ 157] 00:20:56.264 bw ( KiB/s): min= 512, max= 1392, per=2.39%, avg=889.60, stdev=196.96, samples=20 00:20:56.264 iops : min= 128, max= 348, avg=222.40, stdev=49.24, samples=20 00:20:56.264 lat (msec) : 10=0.62%, 20=0.71%, 50=17.59%, 100=69.64%, 250=11.43% 00:20:56.264 cpu : usr=31.08%, sys=2.13%, ctx=894, majf=0, minf=9 00:20:56.264 IO depths : 1=0.1%, 2=1.9%, 4=7.8%, 8=74.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:56.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 complete : 0=0.0%, 4=89.9%, 8=8.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.264 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.264 filename1: (groupid=0, jobs=1): err= 0: pid=83559: Mon Jul 15 16:35:40 2024 00:20:56.264 read: IOPS=237, BW=951KiB/s (974kB/s)(9544KiB/10034msec) 00:20:56.264 slat (usec): min=4, max=8032, avg=30.64, stdev=338.13 00:20:56.264 clat (msec): min=25, max=131, avg=67.10, stdev=20.45 00:20:56.264 lat (msec): min=25, max=131, avg=67.13, stdev=20.45 
00:20:56.264 clat percentiles (msec): 00:20:56.264 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 48], 00:20:56.264 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 72], 00:20:56.264 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:20:56.264 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 130], 00:20:56.265 | 99.99th=[ 132] 00:20:56.265 bw ( KiB/s): min= 712, max= 1240, per=2.55%, avg=950.80, stdev=148.79, samples=20 00:20:56.265 iops : min= 178, max= 310, avg=237.70, stdev=37.20, samples=20 00:20:56.265 lat (msec) : 50=27.45%, 100=63.45%, 250=9.09% 00:20:56.265 cpu : usr=31.30%, sys=1.96%, ctx=958, majf=0, minf=9 00:20:56.265 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:56.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.265 filename1: (groupid=0, jobs=1): err= 0: pid=83560: Mon Jul 15 16:35:40 2024 00:20:56.265 read: IOPS=225, BW=903KiB/s (924kB/s)(9060KiB/10037msec) 00:20:56.265 slat (usec): min=4, max=8030, avg=28.49, stdev=291.39 00:20:56.265 clat (msec): min=7, max=147, avg=70.72, stdev=21.60 00:20:56.265 lat (msec): min=7, max=147, avg=70.75, stdev=21.61 00:20:56.265 clat percentiles (msec): 00:20:56.265 | 1.00th=[ 9], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 52], 00:20:56.265 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 74], 00:20:56.265 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 110], 00:20:56.265 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 130], 99.95th=[ 133], 00:20:56.265 | 99.99th=[ 148] 00:20:56.265 bw ( KiB/s): min= 640, max= 1408, per=2.42%, avg=899.60, stdev=184.47, samples=20 00:20:56.265 iops : min= 160, max= 352, avg=224.90, stdev=46.12, samples=20 00:20:56.265 lat (msec) : 10=1.24%, 20=0.88%, 50=14.66%, 100=72.10%, 250=11.13% 00:20:56.265 cpu : usr=38.13%, sys=2.31%, ctx=1154, majf=0, minf=9 00:20:56.265 IO depths : 1=0.1%, 2=1.9%, 4=7.8%, 8=74.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:56.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 complete : 0=0.0%, 4=89.6%, 8=8.7%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.265 filename2: (groupid=0, jobs=1): err= 0: pid=83561: Mon Jul 15 16:35:40 2024 00:20:56.265 read: IOPS=235, BW=942KiB/s (965kB/s)(9452KiB/10032msec) 00:20:56.265 slat (usec): min=4, max=8024, avg=18.37, stdev=164.84 00:20:56.265 clat (msec): min=13, max=156, avg=67.81, stdev=21.73 00:20:56.265 lat (msec): min=13, max=156, avg=67.82, stdev=21.73 00:20:56.265 clat percentiles (msec): 00:20:56.265 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:20:56.265 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:20:56.265 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 105], 95.00th=[ 109], 00:20:56.265 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 134], 00:20:56.265 | 99.99th=[ 157] 00:20:56.265 bw ( KiB/s): min= 616, max= 1360, per=2.53%, avg=940.40, stdev=191.13, samples=20 00:20:56.265 iops : min= 154, max= 340, avg=235.10, stdev=47.78, samples=20 00:20:56.265 lat (msec) : 20=1.27%, 50=24.55%, 100=63.86%, 250=10.33% 00:20:56.265 cpu : usr=31.49%, sys=2.26%, ctx=910, majf=0, minf=9 
00:20:56.265 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.2%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:56.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 complete : 0=0.0%, 4=88.1%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.265 filename2: (groupid=0, jobs=1): err= 0: pid=83562: Mon Jul 15 16:35:40 2024 00:20:56.265 read: IOPS=961, BW=3845KiB/s (3937kB/s)(37.6MiB/10005msec) 00:20:56.265 slat (usec): min=3, max=8026, avg=22.29, stdev=136.01 00:20:56.265 clat (msec): min=2, max=161, avg=16.56, stdev=25.64 00:20:56.265 lat (msec): min=2, max=161, avg=16.58, stdev=25.65 00:20:56.265 clat percentiles (msec): 00:20:56.265 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:20:56.265 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:20:56.265 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 63], 95.00th=[ 82], 00:20:56.265 | 99.00th=[ 117], 99.50th=[ 123], 99.90th=[ 140], 99.95th=[ 140], 00:20:56.265 | 99.99th=[ 161] 00:20:56.265 bw ( KiB/s): min= 640, max= 9648, per=9.87%, avg=3669.47, stdev=4026.87, samples=19 00:20:56.265 iops : min= 160, max= 2412, avg=917.37, stdev=1006.72, samples=19 00:20:56.265 lat (msec) : 4=2.70%, 10=82.36%, 20=1.03%, 50=2.55%, 100=8.81% 00:20:56.265 lat (msec) : 250=2.55% 00:20:56.265 cpu : usr=54.10%, sys=3.22%, ctx=1576, majf=0, minf=9 00:20:56.265 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:56.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 issued rwts: total=9617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.265 filename2: (groupid=0, jobs=1): err= 0: pid=83563: Mon Jul 15 16:35:40 2024 00:20:56.265 read: IOPS=954, BW=3819KiB/s (3911kB/s)(37.3MiB/10008msec) 00:20:56.265 slat (usec): min=3, max=8025, avg=16.06, stdev=111.46 00:20:56.265 clat (msec): min=2, max=163, avg=16.69, stdev=25.57 00:20:56.265 lat (msec): min=2, max=163, avg=16.70, stdev=25.57 00:20:56.265 clat percentiles (msec): 00:20:56.265 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:20:56.265 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:20:56.265 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 62], 95.00th=[ 81], 00:20:56.265 | 99.00th=[ 114], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:20:56.265 | 99.99th=[ 163] 00:20:56.265 bw ( KiB/s): min= 632, max= 9584, per=9.82%, avg=3653.74, stdev=4004.30, samples=19 00:20:56.265 iops : min= 158, max= 2396, avg=913.42, stdev=1001.08, samples=19 00:20:56.265 lat (msec) : 4=2.54%, 10=82.17%, 20=1.17%, 50=2.27%, 100=9.21% 00:20:56.265 lat (msec) : 250=2.64% 00:20:56.265 cpu : usr=61.64%, sys=4.05%, ctx=920, majf=0, minf=9 00:20:56.265 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:56.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 issued rwts: total=9555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.265 filename2: (groupid=0, jobs=1): err= 0: pid=83564: Mon Jul 15 16:35:40 2024 00:20:56.265 read: IOPS=239, BW=957KiB/s (980kB/s)(9600KiB/10032msec) 00:20:56.265 slat (usec): min=3, 
max=8031, avg=22.60, stdev=231.32 00:20:56.265 clat (msec): min=26, max=130, avg=66.74, stdev=20.37 00:20:56.265 lat (msec): min=26, max=130, avg=66.76, stdev=20.37 00:20:56.265 clat percentiles (msec): 00:20:56.265 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 48], 00:20:56.265 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 72], 00:20:56.265 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:20:56.265 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 131], 00:20:56.265 | 99.99th=[ 131] 00:20:56.265 bw ( KiB/s): min= 720, max= 1240, per=2.57%, avg=956.63, stdev=153.91, samples=19 00:20:56.265 iops : min= 180, max= 310, avg=239.16, stdev=38.48, samples=19 00:20:56.265 lat (msec) : 50=27.33%, 100=64.25%, 250=8.42% 00:20:56.265 cpu : usr=32.33%, sys=2.08%, ctx=936, majf=0, minf=9 00:20:56.265 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:56.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 issued rwts: total=2400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.265 filename2: (groupid=0, jobs=1): err= 0: pid=83565: Mon Jul 15 16:35:40 2024 00:20:56.265 read: IOPS=237, BW=950KiB/s (973kB/s)(9528KiB/10026msec) 00:20:56.265 slat (usec): min=4, max=4044, avg=19.96, stdev=142.70 00:20:56.265 clat (msec): min=21, max=130, avg=67.16, stdev=20.84 00:20:56.265 lat (msec): min=21, max=130, avg=67.18, stdev=20.84 00:20:56.265 clat percentiles (msec): 00:20:56.265 | 1.00th=[ 30], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:20:56.265 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:20:56.265 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 109], 00:20:56.265 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:20:56.265 | 99.99th=[ 131] 00:20:56.265 bw ( KiB/s): min= 640, max= 1280, per=2.55%, avg=948.80, stdev=184.69, samples=20 00:20:56.265 iops : min= 160, max= 320, avg=237.20, stdev=46.17, samples=20 00:20:56.265 lat (msec) : 50=25.31%, 100=66.29%, 250=8.40% 00:20:56.265 cpu : usr=37.91%, sys=2.57%, ctx=991, majf=0, minf=9 00:20:56.265 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:56.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 issued rwts: total=2382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.265 filename2: (groupid=0, jobs=1): err= 0: pid=83566: Mon Jul 15 16:35:40 2024 00:20:56.265 read: IOPS=961, BW=3847KiB/s (3939kB/s)(37.6MiB/10003msec) 00:20:56.265 slat (usec): min=3, max=7023, avg=21.34, stdev=83.01 00:20:56.265 clat (msec): min=2, max=150, avg=16.55, stdev=26.10 00:20:56.265 lat (msec): min=2, max=150, avg=16.57, stdev=26.10 00:20:56.265 clat percentiles (msec): 00:20:56.265 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:20:56.265 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:20:56.265 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 65], 95.00th=[ 81], 00:20:56.265 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 146], 99.95th=[ 146], 00:20:56.265 | 99.99th=[ 150] 00:20:56.265 bw ( KiB/s): min= 640, max= 9728, per=9.86%, avg=3668.53, stdev=4060.15, samples=19 00:20:56.265 iops : min= 160, max= 2432, avg=917.11, stdev=1015.04, 
samples=19 00:20:56.265 lat (msec) : 4=2.88%, 10=82.55%, 20=1.05%, 50=1.84%, 100=8.66% 00:20:56.265 lat (msec) : 250=3.02% 00:20:56.265 cpu : usr=60.51%, sys=3.45%, ctx=962, majf=0, minf=9 00:20:56.265 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:56.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.265 issued rwts: total=9620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.265 filename2: (groupid=0, jobs=1): err= 0: pid=83567: Mon Jul 15 16:35:40 2024 00:20:56.265 read: IOPS=248, BW=993KiB/s (1017kB/s)(9976KiB/10044msec) 00:20:56.265 slat (usec): min=3, max=9023, avg=27.89, stdev=261.11 00:20:56.265 clat (msec): min=19, max=129, avg=64.25, stdev=21.46 00:20:56.265 lat (msec): min=19, max=129, avg=64.28, stdev=21.47 00:20:56.265 clat percentiles (msec): 00:20:56.265 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:20:56.265 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 70], 00:20:56.265 | 70.00th=[ 73], 80.00th=[ 80], 90.00th=[ 94], 95.00th=[ 110], 00:20:56.265 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 130], 00:20:56.265 | 99.99th=[ 130] 00:20:56.266 bw ( KiB/s): min= 664, max= 1368, per=2.67%, avg=991.20, stdev=202.73, samples=20 00:20:56.266 iops : min= 166, max= 342, avg=247.80, stdev=50.68, samples=20 00:20:56.266 lat (msec) : 20=0.56%, 50=31.28%, 100=59.74%, 250=8.42% 00:20:56.266 cpu : usr=40.75%, sys=2.83%, ctx=1289, majf=0, minf=9 00:20:56.266 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:56.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.266 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.266 issued rwts: total=2494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.266 filename2: (groupid=0, jobs=1): err= 0: pid=83568: Mon Jul 15 16:35:40 2024 00:20:56.266 read: IOPS=238, BW=954KiB/s (977kB/s)(9580KiB/10045msec) 00:20:56.266 slat (usec): min=4, max=8035, avg=24.93, stdev=258.90 00:20:56.266 clat (msec): min=23, max=131, avg=66.93, stdev=20.66 00:20:56.266 lat (msec): min=23, max=131, avg=66.95, stdev=20.67 00:20:56.266 clat percentiles (msec): 00:20:56.266 | 1.00th=[ 31], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 48], 00:20:56.266 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 72], 00:20:56.266 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 100], 95.00th=[ 111], 00:20:56.266 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 132], 00:20:56.266 | 99.99th=[ 132] 00:20:56.266 bw ( KiB/s): min= 640, max= 1216, per=2.56%, avg=951.60, stdev=165.17, samples=20 00:20:56.266 iops : min= 160, max= 304, avg=237.90, stdev=41.29, samples=20 00:20:56.266 lat (msec) : 50=26.01%, 100=64.43%, 250=9.56% 00:20:56.266 cpu : usr=36.91%, sys=2.54%, ctx=1083, majf=0, minf=9 00:20:56.266 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:56.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.266 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.266 issued rwts: total=2395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.266 00:20:56.266 Run status group 0 (all jobs): 00:20:56.266 READ: bw=36.3MiB/s 
(38.1MB/s), 848KiB/s-3858KiB/s (868kB/s-3950kB/s), io=365MiB (382MB), run=10002-10045msec 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
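The teardown just traced (destroy_subsystems 0 1 2) and the setup that preceded the 24-thread run reduce to a short RPC sequence per subsystem. Below is a stand-alone sketch of that sequence for subsystem 0, issued with scripts/rpc.py against the target's default RPC socket rather than the harness's rpc_cmd wrapper; the rpc.py path and its use are assumptions, while the commands and arguments are the ones visible in the trace.

# Sketch: the per-subsystem RPCs behind create_subsystem/destroy_subsystem in target/dif.sh.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Setup as used for the 4k/24-thread run above: 64 MB null bdev, 512-byte blocks,
# 16 bytes of metadata, DIF type 2, exported over NVMe/TCP on 10.0.0.2:4420.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Teardown in the order destroy_subsystem uses: drop the subsystem first, then its null bdev.
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0

The next round traced below repeats the same create sequence with --dif-type 1 for two subsystems before launching fio again with bs=8k,16k,128k, numjobs=2 and iodepth=8.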
00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 bdev_null0 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 [2024-07-15 16:35:40.957411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:56.266 16:35:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 bdev_null1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.266 { 00:20:56.266 "params": { 00:20:56.266 "name": "Nvme$subsystem", 00:20:56.266 "trtype": "$TEST_TRANSPORT", 00:20:56.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.266 "adrfam": "ipv4", 00:20:56.266 "trsvcid": "$NVMF_PORT", 00:20:56.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.266 "hdgst": ${hdgst:-false}, 00:20:56.266 "ddgst": ${ddgst:-false} 00:20:56.266 }, 00:20:56.266 "method": "bdev_nvme_attach_controller" 00:20:56.266 } 00:20:56.266 EOF 00:20:56.266 )") 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:56.266 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:56.267 16:35:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:56.267 16:35:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:56.267 16:35:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:56.267 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:56.267 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:56.267 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:56.267 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:56.267 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:56.267 16:35:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.267 { 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme$subsystem", 00:20:56.267 "trtype": "$TEST_TRANSPORT", 00:20:56.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "$NVMF_PORT", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.267 "hdgst": ${hdgst:-false}, 00:20:56.267 "ddgst": ${ddgst:-false} 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 } 00:20:56.267 EOF 00:20:56.267 )") 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme0", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 },{ 00:20:56.267 "params": { 00:20:56.267 "name": "Nvme1", 00:20:56.267 "trtype": "tcp", 00:20:56.267 "traddr": "10.0.0.2", 00:20:56.267 "adrfam": "ipv4", 00:20:56.267 "trsvcid": "4420", 00:20:56.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.267 "hdgst": false, 00:20:56.267 "ddgst": false 00:20:56.267 }, 00:20:56.267 "method": "bdev_nvme_attach_controller" 00:20:56.267 }' 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:56.267 16:35:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:56.267 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:56.267 ... 00:20:56.267 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:56.267 ... 
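The trace above assembles the initiator side of this fio pass on the fly: gen_nvmf_target_json prints one bdev_nvme_attach_controller block per subsystem (Nvme0 and Nvme1, both NVMe/TCP to 10.0.0.2:4420 with digests off), and the harness feeds that JSON to fio's external spdk_bdev ioengine on /dev/fd/62 while gen_fio_conf supplies the job file on /dev/fd/61. A rough standalone sketch of the same invocation, with the config written to a regular file, is below; the subsystems/bdev wrapper object, the file paths and the job file are assumptions inferred from the trace, not a copy of the harness.

# Sketch only: the attach-controller blocks printed above, wrapped in an
# (assumed) standard SPDK JSON config and saved to a file instead of /dev/fd/62.
cat > /tmp/nvmf_dif_bdevs.json <<'JSON'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } }
      ] }
  ]
}
JSON
# Drive fio through the SPDK bdev plugin, as the LD_PRELOAD line in the
# trace does; /tmp/nvmf_dif.fio stands in for the job file on /dev/fd/61.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf=/tmp/nvmf_dif_bdevs.json /tmp/nvmf_dif.fio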
00:20:56.267 fio-3.35 00:20:56.267 Starting 4 threads 00:21:01.531 00:21:01.531 filename0: (groupid=0, jobs=1): err= 0: pid=83738: Mon Jul 15 16:35:46 2024 00:21:01.531 read: IOPS=2103, BW=16.4MiB/s (17.2MB/s)(82.2MiB/5003msec) 00:21:01.531 slat (nsec): min=4784, max=57923, avg=12004.01, stdev=4240.43 00:21:01.531 clat (usec): min=634, max=9083, avg=3765.58, stdev=833.97 00:21:01.531 lat (usec): min=643, max=9113, avg=3777.58, stdev=833.96 00:21:01.531 clat percentiles (usec): 00:21:01.531 | 1.00th=[ 1418], 5.00th=[ 2212], 10.00th=[ 2933], 20.00th=[ 3294], 00:21:01.531 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3785], 60.00th=[ 3884], 00:21:01.531 | 70.00th=[ 4047], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5145], 00:21:01.531 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 6980], 99.95th=[ 8848], 00:21:01.531 | 99.99th=[ 8848] 00:21:01.531 bw ( KiB/s): min=14928, max=18304, per=25.42%, avg=16919.11, stdev=902.05, samples=9 00:21:01.531 iops : min= 1866, max= 2288, avg=2114.89, stdev=112.76, samples=9 00:21:01.531 lat (usec) : 750=0.10%, 1000=0.03% 00:21:01.531 lat (msec) : 2=3.04%, 4=66.00%, 10=30.83% 00:21:01.531 cpu : usr=91.09%, sys=7.98%, ctx=10, majf=0, minf=9 00:21:01.531 IO depths : 1=0.1%, 2=7.4%, 4=63.6%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.531 complete : 0=0.0%, 4=97.1%, 8=2.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.531 issued rwts: total=10526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.531 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:01.531 filename0: (groupid=0, jobs=1): err= 0: pid=83739: Mon Jul 15 16:35:46 2024 00:21:01.531 read: IOPS=2018, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5002msec) 00:21:01.531 slat (nsec): min=5529, max=80003, avg=15991.50, stdev=4183.12 00:21:01.531 clat (usec): min=1278, max=7000, avg=3912.64, stdev=743.08 00:21:01.531 lat (usec): min=1293, max=7019, avg=3928.64, stdev=742.59 00:21:01.531 clat percentiles (usec): 00:21:01.531 | 1.00th=[ 1926], 5.00th=[ 2999], 10.00th=[ 3261], 20.00th=[ 3294], 00:21:01.531 | 30.00th=[ 3326], 40.00th=[ 3752], 50.00th=[ 3851], 60.00th=[ 3916], 00:21:01.531 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5080], 00:21:01.531 | 99.00th=[ 5342], 99.50th=[ 5932], 99.90th=[ 6194], 99.95th=[ 6390], 00:21:01.531 | 99.99th=[ 6718] 00:21:01.531 bw ( KiB/s): min=13824, max=16928, per=24.13%, avg=16060.44, stdev=1090.17, samples=9 00:21:01.531 iops : min= 1728, max= 2116, avg=2007.56, stdev=136.27, samples=9 00:21:01.531 lat (msec) : 2=1.35%, 4=61.85%, 10=36.80% 00:21:01.531 cpu : usr=91.90%, sys=7.16%, ctx=9, majf=0, minf=9 00:21:01.531 IO depths : 1=0.1%, 2=10.2%, 4=62.0%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.531 complete : 0=0.0%, 4=96.0%, 8=4.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.531 issued rwts: total=10098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.531 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:01.531 filename1: (groupid=0, jobs=1): err= 0: pid=83740: Mon Jul 15 16:35:46 2024 00:21:01.531 read: IOPS=2106, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5003msec) 00:21:01.531 slat (nsec): min=5855, max=57833, avg=15148.58, stdev=4420.26 00:21:01.531 clat (usec): min=966, max=8190, avg=3752.79, stdev=866.29 00:21:01.531 lat (usec): min=974, max=8210, avg=3767.94, stdev=866.64 00:21:01.531 clat percentiles (usec): 00:21:01.531 | 1.00th=[ 1663], 5.00th=[ 2180], 10.00th=[ 2900], 
20.00th=[ 3261], 00:21:01.531 | 30.00th=[ 3294], 40.00th=[ 3359], 50.00th=[ 3785], 60.00th=[ 3851], 00:21:01.531 | 70.00th=[ 4047], 80.00th=[ 4424], 90.00th=[ 5080], 95.00th=[ 5145], 00:21:01.531 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 6652], 99.95th=[ 7898], 00:21:01.531 | 99.99th=[ 7963] 00:21:01.531 bw ( KiB/s): min=15072, max=18752, per=25.30%, avg=16844.78, stdev=1084.29, samples=9 00:21:01.531 iops : min= 1884, max= 2344, avg=2105.56, stdev=135.50, samples=9 00:21:01.531 lat (usec) : 1000=0.03% 00:21:01.531 lat (msec) : 2=3.43%, 4=66.02%, 10=30.52% 00:21:01.531 cpu : usr=91.28%, sys=7.80%, ctx=5, majf=0, minf=9 00:21:01.531 IO depths : 1=0.1%, 2=6.9%, 4=63.6%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.531 complete : 0=0.0%, 4=97.3%, 8=2.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.531 issued rwts: total=10541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.531 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:01.531 filename1: (groupid=0, jobs=1): err= 0: pid=83741: Mon Jul 15 16:35:46 2024 00:21:01.531 read: IOPS=2092, BW=16.3MiB/s (17.1MB/s)(81.8MiB/5001msec) 00:21:01.531 slat (nsec): min=6033, max=55394, avg=15714.03, stdev=3884.54 00:21:01.531 clat (usec): min=516, max=6176, avg=3776.11, stdev=808.27 00:21:01.531 lat (usec): min=528, max=6189, avg=3791.82, stdev=807.99 00:21:01.531 clat percentiles (usec): 00:21:01.531 | 1.00th=[ 1614], 5.00th=[ 2212], 10.00th=[ 2966], 20.00th=[ 3261], 00:21:01.531 | 30.00th=[ 3294], 40.00th=[ 3425], 50.00th=[ 3785], 60.00th=[ 3884], 00:21:01.531 | 70.00th=[ 4113], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5080], 00:21:01.531 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 5997], 99.95th=[ 6128], 00:21:01.531 | 99.99th=[ 6194] 00:21:01.531 bw ( KiB/s): min=14877, max=18864, per=25.11%, avg=16714.33, stdev=1047.19, samples=9 00:21:01.531 iops : min= 1859, max= 2358, avg=2089.22, stdev=131.04, samples=9 00:21:01.531 lat (usec) : 750=0.02%, 1000=0.28% 00:21:01.531 lat (msec) : 2=3.14%, 4=64.82%, 10=31.74% 00:21:01.531 cpu : usr=92.18%, sys=6.94%, ctx=10, majf=0, minf=10 00:21:01.531 IO depths : 1=0.1%, 2=7.8%, 4=63.3%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.531 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.531 issued rwts: total=10465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.531 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:01.531 00:21:01.531 Run status group 0 (all jobs): 00:21:01.531 READ: bw=65.0MiB/s (68.2MB/s), 15.8MiB/s-16.5MiB/s (16.5MB/s-17.3MB/s), io=325MiB (341MB), run=5001-5003msec 00:21:01.531 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:01.531 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:01.531 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:01.531 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:01.531 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.532 
16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.532 ************************************ 00:21:01.532 END TEST fio_dif_rand_params 00:21:01.532 ************************************ 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.532 00:21:01.532 real 0m27.137s 00:21:01.532 user 2m23.773s 00:21:01.532 sys 0m9.983s 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.532 16:35:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.789 16:35:47 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:01.789 16:35:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:01.789 16:35:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:01.789 16:35:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.789 16:35:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:01.789 ************************************ 00:21:01.789 START TEST fio_dif_digest 00:21:01.789 ************************************ 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:01.789 16:35:47 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.789 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:01.790 bdev_null0 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:01.790 [2024-07-15 16:35:47.133839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 
00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.790 { 00:21:01.790 "params": { 00:21:01.790 "name": "Nvme$subsystem", 00:21:01.790 "trtype": "$TEST_TRANSPORT", 00:21:01.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.790 "adrfam": "ipv4", 00:21:01.790 "trsvcid": "$NVMF_PORT", 00:21:01.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.790 "hdgst": ${hdgst:-false}, 00:21:01.790 "ddgst": ${ddgst:-false} 00:21:01.790 }, 00:21:01.790 "method": "bdev_nvme_attach_controller" 00:21:01.790 } 00:21:01.790 EOF 00:21:01.790 )") 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:01.790 "params": { 00:21:01.790 "name": "Nvme0", 00:21:01.790 "trtype": "tcp", 00:21:01.790 "traddr": "10.0.0.2", 00:21:01.790 "adrfam": "ipv4", 00:21:01.790 "trsvcid": "4420", 00:21:01.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:01.790 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:01.790 "hdgst": true, 00:21:01.790 "ddgst": true 00:21:01.790 }, 00:21:01.790 "method": "bdev_nvme_attach_controller" 00:21:01.790 }' 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:01.790 16:35:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.790 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:01.790 ... 
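The digest pass differs from the previous run mainly in the attach parameters, "hdgst": true and "ddgst": true, which turn on NVMe/TCP header and data digests for the connection, and in the job shape shown in the filename0 banner above (randread, 128 KiB blocks, iodepth 3, three jobs, 10 seconds per the dif.sh@127 settings). The job file itself arrives on /dev/fd/61 and is not echoed in the log; a sketch of an equivalent file follows, where the section name, thread option and the Nvme0n1 bdev name (SPDK's usual <controller>n<nsid> convention) are assumptions rather than copies of gen_fio_conf.

# Sketch of a job file matching the printed banner; only bs/iodepth/numjobs/
# runtime come from the trace, the rest is assumed.
cat > /tmp/nvmf_dif_digest.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
FIO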
00:21:01.790 fio-3.35 00:21:01.790 Starting 3 threads 00:21:13.989 00:21:13.989 filename0: (groupid=0, jobs=1): err= 0: pid=83846: Mon Jul 15 16:35:57 2024 00:21:13.989 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10009msec) 00:21:13.989 slat (usec): min=6, max=229, avg=23.64, stdev=12.20 00:21:13.989 clat (usec): min=12545, max=15958, avg=13236.70, stdev=172.25 00:21:13.989 lat (usec): min=12554, max=15975, avg=13260.34, stdev=176.13 00:21:13.989 clat percentiles (usec): 00:21:13.989 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:21:13.989 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:21:13.989 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:21:13.989 | 99.00th=[13698], 99.50th=[13698], 99.90th=[15926], 99.95th=[15926], 00:21:13.989 | 99.99th=[15926] 00:21:13.989 bw ( KiB/s): min=28416, max=29184, per=33.32%, avg=28876.80, stdev=386.02, samples=20 00:21:13.989 iops : min= 222, max= 228, avg=225.60, stdev= 3.02, samples=20 00:21:13.989 lat (msec) : 20=100.00% 00:21:13.989 cpu : usr=92.37%, sys=6.72%, ctx=209, majf=0, minf=9 00:21:13.989 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:13.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.989 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.989 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:13.989 filename0: (groupid=0, jobs=1): err= 0: pid=83847: Mon Jul 15 16:35:57 2024 00:21:13.989 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10011msec) 00:21:13.989 slat (nsec): min=6740, max=74483, avg=21450.28, stdev=10446.02 00:21:13.989 clat (usec): min=12941, max=17694, avg=13243.54, stdev=216.27 00:21:13.989 lat (usec): min=12966, max=17722, avg=13264.99, stdev=219.21 00:21:13.989 clat percentiles (usec): 00:21:13.989 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:21:13.989 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:21:13.989 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:21:13.989 | 99.00th=[13698], 99.50th=[13698], 99.90th=[17695], 99.95th=[17695], 00:21:13.990 | 99.99th=[17695] 00:21:13.990 bw ( KiB/s): min=28416, max=29242, per=33.34%, avg=28893.90, stdev=383.67, samples=20 00:21:13.990 iops : min= 222, max= 228, avg=225.60, stdev= 3.02, samples=20 00:21:13.990 lat (msec) : 20=100.00% 00:21:13.990 cpu : usr=92.22%, sys=7.15%, ctx=7, majf=0, minf=0 00:21:13.990 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:13.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.990 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.990 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:13.990 filename0: (groupid=0, jobs=1): err= 0: pid=83848: Mon Jul 15 16:35:57 2024 00:21:13.990 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10009msec) 00:21:13.990 slat (nsec): min=7784, max=59720, avg=23444.59, stdev=10782.08 00:21:13.990 clat (usec): min=12936, max=16051, avg=13237.76, stdev=171.26 00:21:13.990 lat (usec): min=12950, max=16072, avg=13261.21, stdev=175.14 00:21:13.990 clat percentiles (usec): 00:21:13.990 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:21:13.990 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 
60.00th=[13304], 00:21:13.990 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:21:13.990 | 99.00th=[13698], 99.50th=[13698], 99.90th=[16057], 99.95th=[16057], 00:21:13.990 | 99.99th=[16057] 00:21:13.990 bw ( KiB/s): min=28416, max=29184, per=33.32%, avg=28876.80, stdev=386.02, samples=20 00:21:13.990 iops : min= 222, max= 228, avg=225.60, stdev= 3.02, samples=20 00:21:13.990 lat (msec) : 20=100.00% 00:21:13.990 cpu : usr=93.08%, sys=6.41%, ctx=8, majf=0, minf=0 00:21:13.990 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:13.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.990 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.990 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:13.990 00:21:13.990 Run status group 0 (all jobs): 00:21:13.990 READ: bw=84.6MiB/s (88.7MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=847MiB (888MB), run=10009-10011msec 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:13.990 ************************************ 00:21:13.990 END TEST fio_dif_digest 00:21:13.990 ************************************ 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.990 00:21:13.990 real 0m10.990s 00:21:13.990 user 0m28.401s 00:21:13.990 sys 0m2.299s 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.990 16:35:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:13.990 16:35:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:13.990 16:35:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:13.990 rmmod nvme_tcp 00:21:13.990 rmmod nvme_fabrics 00:21:13.990 rmmod nvme_keyring 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83070 ']' 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83070 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83070 ']' 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83070 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83070 00:21:13.990 killing process with pid 83070 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83070' 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83070 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83070 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:13.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:13.990 Waiting for block devices as requested 00:21:13.990 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:13.990 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.990 16:35:58 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:13.990 16:35:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.990 16:35:59 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:13.990 00:21:13.990 real 1m3.246s 00:21:13.990 user 4m11.424s 00:21:13.990 sys 0m20.955s 00:21:13.990 ************************************ 00:21:13.990 END TEST nvmf_dif 00:21:13.990 ************************************ 00:21:13.990 16:35:59 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.990 16:35:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:13.990 16:35:59 -- common/autotest_common.sh@1142 -- # return 0 00:21:13.990 16:35:59 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:13.990 16:35:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:13.990 16:35:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.990 16:35:59 -- common/autotest_common.sh@10 -- # set +x 00:21:13.990 ************************************ 00:21:13.990 START TEST nvmf_abort_qd_sizes 00:21:13.990 ************************************ 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 
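Before the next suite starts, nvmftestfini tears the environment down: the kernel initiator modules are unloaded (the rmmod lines above), the SPDK target process (pid 83070 in this run) is killed, and setup.sh reset rebinds the emulated NVMe devices from uio_pci_generic back to the kernel nvme driver. Condensed into plain commands, the teardown seen in the trace amounts to roughly the following; the explicit pid variable is an assumption for the sketch.

# Rough equivalent of the nvmftestfini / setup.sh reset path traced above.
sync
modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring here
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                # 83070 in this run
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset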
00:21:13.990 * Looking for test storage... 00:21:13.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:13.990 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:13.991 16:35:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:13.991 Cannot find device "nvmf_tgt_br" 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:13.991 Cannot find device "nvmf_tgt_br2" 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:13.991 Cannot find device "nvmf_tgt_br" 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:13.991 Cannot find device "nvmf_tgt_br2" 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:13.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:13.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:13.991 16:35:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:13.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:21:13.991 00:21:13.991 --- 10.0.0.2 ping statistics --- 00:21:13.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.991 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:13.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:13.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:21:13.991 00:21:13.991 --- 10.0.0.3 ping statistics --- 00:21:13.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.991 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:13.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:13.991 00:21:13.991 --- 10.0.0.1 ping statistics --- 00:21:13.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.991 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:13.991 16:35:59 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:14.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:14.815 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:14.815 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84439 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84439 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84439 ']' 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.074 16:36:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:15.074 [2024-07-15 16:36:00.446044] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
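The three pings above verify the virtual test network that nvmf_veth_init assembled before the target was launched inside the namespace: a nvmf_tgt_ns_spdk namespace, three veth pairs whose host-side ends join the nvmf_br bridge, 10.0.0.1/24 on the initiator interface, 10.0.0.2/24 and 10.0.0.3/24 on the target interfaces in the namespace, and an iptables ACCEPT rule for TCP port 4420. Stripped of the cleanup attempts and harness wrappers, the topology can be rebuilt with the commands taken from the trace; only the comments are added.

# Namespace, veth pairs and addressing, as in nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side veth ends and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> first target IP, as checked above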
00:21:15.075 [2024-07-15 16:36:00.446130] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.075 [2024-07-15 16:36:00.583362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.333 [2024-07-15 16:36:00.716190] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.333 [2024-07-15 16:36:00.716544] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.333 [2024-07-15 16:36:00.716746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.333 [2024-07-15 16:36:00.716956] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.333 [2024-07-15 16:36:00.717140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.333 [2024-07-15 16:36:00.717376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.333 [2024-07-15 16:36:00.717491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.333 [2024-07-15 16:36:00.717545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.333 [2024-07-15 16:36:00.717548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.333 [2024-07-15 16:36:00.770968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:15.898 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.898 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:21:15.898 16:36:01 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.898 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.898 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:16.157 16:36:01 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
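The nvme_in_userspace helper traced above enumerates PCI functions whose class/subclass/prog-if is 01/08/02 (NVM Express) and then filters them through pci_can_use and the /sys/bus/pci/drivers checks. The core of the enumeration is the lspci/awk pipeline shown in the trace; condensed to one line (the allow/block-list filtering is omitted in this sketch):

  # NVMe controllers are PCI class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe)
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

In this run it yields the two QEMU-emulated controllers, 0000:00:10.0 and 0000:00:11.0.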
00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:16.157 16:36:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:16.157 ************************************ 00:21:16.157 START TEST spdk_target_abort 00:21:16.157 ************************************ 00:21:16.157 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:21:16.157 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:16.157 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:16.157 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.157 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.157 spdk_targetn1 00:21:16.157 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.157 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.157 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.157 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.158 [2024-07-15 16:36:01.572341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.158 [2024-07-15 16:36:01.600502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.158 16:36:01 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:16.158 16:36:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:19.452 Initializing NVMe Controllers 00:21:19.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:19.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:19.452 Initialization complete. Launching workers. 
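Strung together, the spdk_target_abort setup above attaches the local NVMe controller as an SPDK bdev, exports it over NVMe/TCP, and then runs the abort example against the new subsystem at increasing queue depths. A condensed sketch using scripts/rpc.py (rpc_cmd in the trace is the harness's wrapper around that same script):

  rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # one abort pass per queue depth, 50/50 read-write, 4 KiB I/O, against the new subsystem
  for qd in 4 24 64; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done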
00:21:19.452 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11722, failed: 0 00:21:19.452 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1036, failed to submit 10686 00:21:19.452 success 772, unsuccess 264, failed 0 00:21:19.452 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:19.452 16:36:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:22.735 Initializing NVMe Controllers 00:21:22.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:22.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:22.735 Initialization complete. Launching workers. 00:21:22.735 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8880, failed: 0 00:21:22.735 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1153, failed to submit 7727 00:21:22.735 success 381, unsuccess 772, failed 0 00:21:22.735 16:36:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:22.735 16:36:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:26.020 Initializing NVMe Controllers 00:21:26.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:26.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:26.020 Initialization complete. Launching workers. 
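The counters the abort example prints are internally consistent: successful plus unsuccessful aborts equals the number of abort commands actually submitted, and submitted plus failed-to-submit equals the total I/O count. For the queue-depth-4 run above, 772 + 264 = 1036 aborts submitted and 1036 + 10686 = 11722 I/Os completed; for queue depth 24, 381 + 772 = 1153 and 1153 + 7727 = 8880. A throwaway check of the first run:

  echo $((772 + 264))     # 1036  = aborts submitted   (qd=4)
  echo $((1036 + 10686))  # 11722 = I/O completed      (qd=4)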
00:21:26.020 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31104, failed: 0 00:21:26.020 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2246, failed to submit 28858 00:21:26.020 success 443, unsuccess 1803, failed 0 00:21:26.020 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:26.020 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.020 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.020 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.020 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:26.020 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.020 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84439 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84439 ']' 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84439 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84439 00:21:26.614 killing process with pid 84439 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84439' 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84439 00:21:26.614 16:36:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84439 00:21:26.908 ************************************ 00:21:26.908 END TEST spdk_target_abort 00:21:26.908 ************************************ 00:21:26.908 00:21:26.908 real 0m10.783s 00:21:26.908 user 0m43.067s 00:21:26.908 sys 0m2.088s 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.908 16:36:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:26.908 16:36:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:26.908 16:36:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:26.908 16:36:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.908 16:36:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:26.908 
************************************ 00:21:26.908 START TEST kernel_target_abort 00:21:26.908 ************************************ 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:26.908 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:27.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:27.173 Waiting for block devices as requested 00:21:27.173 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:27.431 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:27.431 No valid GPT data, bailing 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:27.431 No valid GPT data, bailing 00:21:27.431 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
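The kernel_target_abort setup needs a raw NVMe namespace it can hand to the kernel nvmet driver, so the loop above walks /sys/block/nvme*, skips zoned devices, and treats a device as free when spdk-gpt.py and blkid report no partition table on it. A minimal sketch of that filter under simplified assumptions (the harness's block_in_use also consults SPDK's own spdk-gpt.py helper; plain blkid stands in for it here):

  for blk in /sys/block/nvme*; do
      dev=${blk##*/}
      # skip zoned namespaces (queue/zoned reports anything other than "none")
      [[ -e $blk/queue/zoned && $(<"$blk/queue/zoned") != none ]] && continue
      # no partition table reported => treat the device as free for the kernel target
      if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
          nvme=/dev/$dev
      fi
  done
  echo "kernel target will use $nvme"   # the trace above settles on /dev/nvme1n1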
00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:27.690 16:36:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:27.690 No valid GPT data, bailing 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:27.690 No valid GPT data, bailing 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc --hostid=6219369d-37e8-4ec9-9c79-8e30851e9efc -a 10.0.0.1 -t tcp -s 4420 00:21:27.690 00:21:27.690 Discovery Log Number of Records 2, Generation counter 2 00:21:27.690 =====Discovery Log Entry 0====== 00:21:27.690 trtype: tcp 00:21:27.690 adrfam: ipv4 00:21:27.690 subtype: current discovery subsystem 00:21:27.690 treq: not specified, sq flow control disable supported 00:21:27.690 portid: 1 00:21:27.690 trsvcid: 4420 00:21:27.690 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:27.690 traddr: 10.0.0.1 00:21:27.690 eflags: none 00:21:27.690 sectype: none 00:21:27.690 =====Discovery Log Entry 1====== 00:21:27.690 trtype: tcp 00:21:27.690 adrfam: ipv4 00:21:27.690 subtype: nvme subsystem 00:21:27.690 treq: not specified, sq flow control disable supported 00:21:27.690 portid: 1 00:21:27.690 trsvcid: 4420 00:21:27.690 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:27.690 traddr: 10.0.0.1 00:21:27.690 eflags: none 00:21:27.690 sectype: none 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:27.690 16:36:13 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:27.690 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:27.691 16:36:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:30.973 Initializing NVMe Controllers 00:21:30.973 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:30.973 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:30.973 Initialization complete. Launching workers. 00:21:30.973 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35364, failed: 0 00:21:30.973 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35364, failed to submit 0 00:21:30.973 success 0, unsuccess 35364, failed 0 00:21:30.973 16:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:30.973 16:36:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:34.255 Initializing NVMe Controllers 00:21:34.255 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:34.255 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:34.255 Initialization complete. Launching workers. 
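The kernel target exercised in this test is the stock Linux nvmet driver configured through configfs: a subsystem directory with one namespace backed by /dev/nvme1n1, a port directory describing the 10.0.0.1:4420 TCP listener, and a symlink tying the two together. The redirect targets of the bare echo calls are not visible in the xtrace, so the attribute names below follow the standard nvmet configfs layout rather than a literal transcript (the subsystem identification string the harness also writes is omitted):

  modprobe nvmet        # nvmet_tcp ends up loaded as well; see the modprobe -r at cleanup
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  # verify through the discovery service, as the trace does:
  nvme discover -t tcp -a 10.0.0.1 -s 4420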
00:21:34.255 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73083, failed: 0 00:21:34.255 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31944, failed to submit 41139 00:21:34.255 success 0, unsuccess 31944, failed 0 00:21:34.255 16:36:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:34.255 16:36:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:37.541 Initializing NVMe Controllers 00:21:37.541 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:37.541 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:37.541 Initialization complete. Launching workers. 00:21:37.541 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85810, failed: 0 00:21:37.541 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21442, failed to submit 64368 00:21:37.541 success 0, unsuccess 21442, failed 0 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:37.541 16:36:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:38.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:40.084 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:40.085 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:40.085 00:21:40.085 real 0m13.187s 00:21:40.085 user 0m6.352s 00:21:40.085 sys 0m4.155s 00:21:40.085 16:36:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.085 16:36:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.085 ************************************ 00:21:40.085 END TEST kernel_target_abort 00:21:40.085 ************************************ 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:40.085 
16:36:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.085 rmmod nvme_tcp 00:21:40.085 rmmod nvme_fabrics 00:21:40.085 rmmod nvme_keyring 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84439 ']' 00:21:40.085 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84439 00:21:40.343 16:36:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84439 ']' 00:21:40.343 16:36:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84439 00:21:40.343 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84439) - No such process 00:21:40.343 16:36:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84439 is not found' 00:21:40.343 Process with pid 84439 is not found 00:21:40.343 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:40.343 16:36:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:40.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:40.601 Waiting for block devices as requested 00:21:40.601 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:40.601 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:40.860 00:21:40.860 real 0m27.127s 00:21:40.860 user 0m50.540s 00:21:40.860 sys 0m7.508s 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.860 ************************************ 00:21:40.860 END TEST nvmf_abort_qd_sizes 00:21:40.860 ************************************ 00:21:40.860 16:36:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:40.860 16:36:26 -- common/autotest_common.sh@1142 -- # return 0 00:21:40.860 16:36:26 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:40.860 16:36:26 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:40.860 16:36:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.860 16:36:26 -- common/autotest_common.sh@10 -- # set +x 00:21:40.860 ************************************ 00:21:40.860 START TEST keyring_file 00:21:40.860 ************************************ 00:21:40.860 16:36:26 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:40.860 * Looking for test storage... 00:21:40.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:40.860 16:36:26 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:40.860 16:36:26 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.860 16:36:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:40.860 16:36:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.860 16:36:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.860 16:36:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.860 16:36:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.860 16:36:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.861 16:36:26 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.861 16:36:26 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.861 16:36:26 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.861 16:36:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.861 16:36:26 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.861 16:36:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.861 16:36:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:40.861 16:36:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.861 16:36:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:40.861 16:36:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:40.861 16:36:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:40.861 16:36:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:40.861 16:36:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:40.861 16:36:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:40.861 16:36:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:40.861 16:36:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:40.861 16:36:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:40.861 16:36:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:40.861 16:36:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:40.861 16:36:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:40.861 16:36:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JukkrO04Bx 00:21:40.861 16:36:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:40.861 16:36:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JukkrO04Bx 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JukkrO04Bx 00:21:41.120 16:36:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.JukkrO04Bx 00:21:41.120 16:36:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.t6ZaJXbetb 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:41.120 16:36:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:41.120 16:36:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:41.120 16:36:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:41.120 16:36:26 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:41.120 16:36:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:41.120 16:36:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.t6ZaJXbetb 00:21:41.120 16:36:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.t6ZaJXbetb 00:21:41.120 16:36:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.t6ZaJXbetb 00:21:41.120 16:36:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=85300 00:21:41.120 16:36:26 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:41.120 16:36:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85300 00:21:41.120 16:36:26 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85300 ']' 00:21:41.120 16:36:26 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.120 16:36:26 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.120 16:36:26 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.120 16:36:26 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.120 16:36:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:41.120 [2024-07-15 16:36:26.551817] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
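prep_key above turns each raw hex secret into an NVMe TLS PSK interchange string ("NVMeTLSkey-1:<hash>:<base64>:") and stores it in a 0600 temp file. The sketch below mirrors that flow, but the details are assumptions about the helper rather than a transcript of it: the secret is interpreted as hex bytes, a little-endian CRC32 is appended before base64 encoding, and "00" is used as the hash indicator for the digest argument of 0.

  key_path=$(mktemp)     # e.g. /tmp/tmp.JukkrO04Bx in the run above
  python3 - <<'EOF' > "$key_path"
import base64, zlib
psk = bytes.fromhex("00112233445566778899aabbccddeeff")   # assumed hex interpretation of the test key
crc = zlib.crc32(psk).to_bytes(4, "little")                # assumed little-endian CRC32 suffix
print("NVMeTLSkey-1:00:" + base64.b64encode(psk + crc).decode() + ":")
EOF
  chmod 0600 "$key_path"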
00:21:41.120 [2024-07-15 16:36:26.551924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85300 ] 00:21:41.379 [2024-07-15 16:36:26.691011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.379 [2024-07-15 16:36:26.817289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.379 [2024-07-15 16:36:26.875495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:42.314 16:36:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:42.314 [2024-07-15 16:36:27.620976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.314 null0 00:21:42.314 [2024-07-15 16:36:27.652951] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:42.314 [2024-07-15 16:36:27.653192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:42.314 [2024-07-15 16:36:27.660906] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.314 16:36:27 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:42.314 [2024-07-15 16:36:27.672952] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:42.314 request: 00:21:42.314 { 00:21:42.314 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:42.314 "secure_channel": false, 00:21:42.314 "listen_address": { 00:21:42.314 "trtype": "tcp", 00:21:42.314 "traddr": "127.0.0.1", 00:21:42.314 "trsvcid": "4420" 00:21:42.314 }, 00:21:42.314 "method": "nvmf_subsystem_add_listener", 00:21:42.314 "req_id": 1 00:21:42.314 } 00:21:42.314 Got JSON-RPC error response 00:21:42.314 response: 00:21:42.314 { 00:21:42.314 "code": -32602, 00:21:42.314 "message": "Invalid parameters" 00:21:42.314 } 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
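With the target-side listener in place, the initiator path is driven through bdevperf started with -z (so it sits idle until told what to do) on its own RPC socket, /var/tmp/bperf.sock; the keyring entries and the TLS-enabled bdev attach are then issued over that socket. The steps traced next boil down to the following (key paths are the temp files created earlier in this run):

  bperf_sock=/var/tmp/bperf.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r "$bperf_sock" -z &
  # ...wait for the socket to appear, then register both keys and attach with a PSK:
  "$rpc" -s "$bperf_sock" keyring_file_add_key key0 /tmp/tmp.JukkrO04Bx
  "$rpc" -s "$bperf_sock" keyring_file_add_key key1 /tmp/tmp.t6ZaJXbetb
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0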
00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:42.314 16:36:27 keyring_file -- keyring/file.sh@46 -- # bperfpid=85317 00:21:42.314 16:36:27 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:42.314 16:36:27 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85317 /var/tmp/bperf.sock 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85317 ']' 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.314 16:36:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:42.314 [2024-07-15 16:36:27.725266] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 00:21:42.314 [2024-07-15 16:36:27.725336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85317 ] 00:21:42.314 [2024-07-15 16:36:27.857703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.571 [2024-07-15 16:36:27.969230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.571 [2024-07-15 16:36:28.022862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:43.545 16:36:28 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.545 16:36:28 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:43.545 16:36:28 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JukkrO04Bx 00:21:43.545 16:36:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JukkrO04Bx 00:21:43.545 16:36:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t6ZaJXbetb 00:21:43.545 16:36:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t6ZaJXbetb 00:21:43.803 16:36:29 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:43.803 16:36:29 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:43.803 16:36:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:43.803 16:36:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.803 16:36:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:44.111 16:36:29 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.JukkrO04Bx == 
\/\t\m\p\/\t\m\p\.\J\u\k\k\r\O\0\4\B\x ]] 00:21:44.111 16:36:29 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:44.111 16:36:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:44.111 16:36:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:44.111 16:36:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.111 16:36:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.369 16:36:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.t6ZaJXbetb == \/\t\m\p\/\t\m\p\.\t\6\Z\a\J\X\b\e\t\b ]] 00:21:44.369 16:36:29 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:44.369 16:36:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:44.369 16:36:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:44.369 16:36:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.369 16:36:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:44.369 16:36:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.627 16:36:30 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:44.627 16:36:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:44.627 16:36:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:44.627 16:36:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:44.627 16:36:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.627 16:36:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.627 16:36:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:44.884 16:36:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:44.884 16:36:30 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:44.884 16:36:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:45.142 [2024-07-15 16:36:30.592041] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.142 nvme0n1 00:21:45.142 16:36:30 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:45.142 16:36:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:45.142 16:36:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:45.142 16:36:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:45.142 16:36:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.142 16:36:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:45.707 16:36:30 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:45.707 16:36:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:45.707 16:36:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:45.707 16:36:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:45.707 16:36:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:45.707 16:36:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.707 16:36:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:45.966 16:36:31 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:45.966 16:36:31 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:45.966 Running I/O for 1 seconds... 00:21:46.898 00:21:46.898 Latency(us) 00:21:46.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.898 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:46.898 nvme0n1 : 1.01 11039.94 43.12 0.00 0.00 11555.18 5749.29 23116.33 00:21:46.898 =================================================================================================================== 00:21:46.898 Total : 11039.94 43.12 0.00 0.00 11555.18 5749.29 23116.33 00:21:46.898 0 00:21:46.898 16:36:32 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:46.898 16:36:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:47.156 16:36:32 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:47.156 16:36:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:47.156 16:36:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.156 16:36:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.156 16:36:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:47.156 16:36:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.722 16:36:32 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:47.722 16:36:32 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:47.722 16:36:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:47.722 16:36:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.722 16:36:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.722 16:36:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.722 16:36:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:47.722 16:36:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:47.722 16:36:33 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:47.722 16:36:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:47.722 16:36:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:47.722 16:36:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:47.722 16:36:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.722 16:36:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:47.722 16:36:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:21:47.722 16:36:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:47.722 16:36:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:47.980 [2024-07-15 16:36:33.448218] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:47.980 [2024-07-15 16:36:33.448686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58a4f0 (107): Transport endpoint is not connected 00:21:47.980 [2024-07-15 16:36:33.449676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x58a4f0 (9): Bad file descriptor 00:21:47.980 [2024-07-15 16:36:33.450674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:47.980 [2024-07-15 16:36:33.450694] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:47.980 [2024-07-15 16:36:33.450704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:47.980 request: 00:21:47.980 { 00:21:47.980 "name": "nvme0", 00:21:47.980 "trtype": "tcp", 00:21:47.980 "traddr": "127.0.0.1", 00:21:47.980 "adrfam": "ipv4", 00:21:47.980 "trsvcid": "4420", 00:21:47.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:47.980 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:47.980 "prchk_reftag": false, 00:21:47.980 "prchk_guard": false, 00:21:47.980 "hdgst": false, 00:21:47.980 "ddgst": false, 00:21:47.980 "psk": "key1", 00:21:47.980 "method": "bdev_nvme_attach_controller", 00:21:47.980 "req_id": 1 00:21:47.980 } 00:21:47.980 Got JSON-RPC error response 00:21:47.980 response: 00:21:47.980 { 00:21:47.980 "code": -5, 00:21:47.980 "message": "Input/output error" 00:21:47.980 } 00:21:47.980 16:36:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:47.980 16:36:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:47.980 16:36:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:47.980 16:36:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:47.980 16:36:33 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:47.980 16:36:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:47.980 16:36:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.980 16:36:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:47.980 16:36:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.980 16:36:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.238 16:36:33 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:48.238 16:36:33 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:48.238 16:36:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:48.238 16:36:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:48.238 16:36:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:48.238 16:36:33 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:21:48.238 16:36:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.495 16:36:33 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:48.495 16:36:33 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:48.495 16:36:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:48.752 16:36:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:48.752 16:36:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:49.022 16:36:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:49.022 16:36:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:49.022 16:36:34 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:49.591 16:36:34 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:49.591 16:36:34 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.JukkrO04Bx 00:21:49.591 16:36:34 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.JukkrO04Bx 00:21:49.591 16:36:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:49.591 16:36:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.JukkrO04Bx 00:21:49.591 16:36:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:49.591 16:36:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.591 16:36:34 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:49.591 16:36:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.591 16:36:34 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JukkrO04Bx 00:21:49.591 16:36:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JukkrO04Bx 00:21:49.591 [2024-07-15 16:36:35.057176] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JukkrO04Bx': 0100660 00:21:49.591 [2024-07-15 16:36:35.057260] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:49.591 request: 00:21:49.591 { 00:21:49.591 "name": "key0", 00:21:49.591 "path": "/tmp/tmp.JukkrO04Bx", 00:21:49.591 "method": "keyring_file_add_key", 00:21:49.591 "req_id": 1 00:21:49.591 } 00:21:49.591 Got JSON-RPC error response 00:21:49.591 response: 00:21:49.591 { 00:21:49.591 "code": -1, 00:21:49.591 "message": "Operation not permitted" 00:21:49.591 } 00:21:49.591 16:36:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:49.591 16:36:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:49.591 16:36:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:49.591 16:36:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:49.591 16:36:35 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.JukkrO04Bx 00:21:49.591 16:36:35 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JukkrO04Bx 00:21:49.591 16:36:35 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JukkrO04Bx 00:21:49.849 16:36:35 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.JukkrO04Bx 00:21:49.849 16:36:35 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:49.849 16:36:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:49.849 16:36:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:49.849 16:36:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:49.849 16:36:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:49.849 16:36:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:50.107 16:36:35 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:50.107 16:36:35 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:50.107 16:36:35 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:50.107 16:36:35 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:50.107 16:36:35 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:50.107 16:36:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.107 16:36:35 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:50.107 16:36:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.107 16:36:35 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:50.107 16:36:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:50.364 [2024-07-15 16:36:35.817359] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.JukkrO04Bx': No such file or directory 00:21:50.364 [2024-07-15 16:36:35.817445] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:50.365 [2024-07-15 16:36:35.817475] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:50.365 [2024-07-15 16:36:35.817484] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:50.365 [2024-07-15 16:36:35.817495] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:50.365 request: 00:21:50.365 { 00:21:50.365 "name": "nvme0", 00:21:50.365 "trtype": "tcp", 00:21:50.365 "traddr": "127.0.0.1", 00:21:50.365 "adrfam": "ipv4", 00:21:50.365 "trsvcid": "4420", 00:21:50.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:50.365 "prchk_reftag": false, 00:21:50.365 "prchk_guard": false, 00:21:50.365 "hdgst": false, 00:21:50.365 "ddgst": false, 00:21:50.365 "psk": "key0", 00:21:50.365 "method": "bdev_nvme_attach_controller", 00:21:50.365 "req_id": 1 00:21:50.365 } 00:21:50.365 
Got JSON-RPC error response 00:21:50.365 response: 00:21:50.365 { 00:21:50.365 "code": -19, 00:21:50.365 "message": "No such device" 00:21:50.365 } 00:21:50.365 16:36:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:50.365 16:36:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.365 16:36:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.365 16:36:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.365 16:36:35 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:50.365 16:36:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:50.622 16:36:36 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fqVpWYM7iO 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:50.622 16:36:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:50.622 16:36:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:50.622 16:36:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:50.622 16:36:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:50.622 16:36:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:50.622 16:36:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fqVpWYM7iO 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fqVpWYM7iO 00:21:50.622 16:36:36 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.fqVpWYM7iO 00:21:50.622 16:36:36 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fqVpWYM7iO 00:21:50.622 16:36:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fqVpWYM7iO 00:21:50.880 16:36:36 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:50.880 16:36:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:51.138 nvme0n1 00:21:51.138 16:36:36 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:51.138 16:36:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:51.138 16:36:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:51.138 16:36:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:51.138 16:36:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.138 16:36:36 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.397 16:36:36 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:51.397 16:36:36 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:51.397 16:36:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:51.963 16:36:37 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:51.963 16:36:37 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:51.963 16:36:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.963 16:36:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:51.963 16:36:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.221 16:36:37 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:52.221 16:36:37 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:52.221 16:36:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:52.221 16:36:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:52.221 16:36:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:52.221 16:36:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:52.221 16:36:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.521 16:36:37 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:52.521 16:36:37 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:52.521 16:36:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:52.779 16:36:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:52.779 16:36:38 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:52.779 16:36:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.038 16:36:38 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:53.038 16:36:38 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fqVpWYM7iO 00:21:53.038 16:36:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fqVpWYM7iO 00:21:53.298 16:36:38 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t6ZaJXbetb 00:21:53.298 16:36:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t6ZaJXbetb 00:21:53.557 16:36:38 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.557 16:36:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.815 nvme0n1 00:21:53.815 16:36:39 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:53.815 16:36:39 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:54.073 16:36:39 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:54.073 "subsystems": [ 00:21:54.073 { 00:21:54.073 "subsystem": "keyring", 00:21:54.073 "config": [ 00:21:54.073 { 00:21:54.073 "method": "keyring_file_add_key", 00:21:54.073 "params": { 00:21:54.073 "name": "key0", 00:21:54.073 "path": "/tmp/tmp.fqVpWYM7iO" 00:21:54.073 } 00:21:54.073 }, 00:21:54.073 { 00:21:54.073 "method": "keyring_file_add_key", 00:21:54.073 "params": { 00:21:54.073 "name": "key1", 00:21:54.073 "path": "/tmp/tmp.t6ZaJXbetb" 00:21:54.073 } 00:21:54.073 } 00:21:54.073 ] 00:21:54.073 }, 00:21:54.073 { 00:21:54.073 "subsystem": "iobuf", 00:21:54.073 "config": [ 00:21:54.073 { 00:21:54.073 "method": "iobuf_set_options", 00:21:54.073 "params": { 00:21:54.073 "small_pool_count": 8192, 00:21:54.073 "large_pool_count": 1024, 00:21:54.073 "small_bufsize": 8192, 00:21:54.073 "large_bufsize": 135168 00:21:54.073 } 00:21:54.073 } 00:21:54.073 ] 00:21:54.073 }, 00:21:54.073 { 00:21:54.073 "subsystem": "sock", 00:21:54.073 "config": [ 00:21:54.073 { 00:21:54.073 "method": "sock_set_default_impl", 00:21:54.073 "params": { 00:21:54.073 "impl_name": "uring" 00:21:54.073 } 00:21:54.073 }, 00:21:54.073 { 00:21:54.073 "method": "sock_impl_set_options", 00:21:54.073 "params": { 00:21:54.073 "impl_name": "ssl", 00:21:54.073 "recv_buf_size": 4096, 00:21:54.073 "send_buf_size": 4096, 00:21:54.073 "enable_recv_pipe": true, 00:21:54.073 "enable_quickack": false, 00:21:54.073 "enable_placement_id": 0, 00:21:54.073 "enable_zerocopy_send_server": true, 00:21:54.073 "enable_zerocopy_send_client": false, 00:21:54.073 "zerocopy_threshold": 0, 00:21:54.073 "tls_version": 0, 00:21:54.073 "enable_ktls": false 00:21:54.073 } 00:21:54.073 }, 00:21:54.073 { 00:21:54.073 "method": "sock_impl_set_options", 00:21:54.073 "params": { 00:21:54.073 "impl_name": "posix", 00:21:54.073 "recv_buf_size": 2097152, 00:21:54.073 "send_buf_size": 2097152, 00:21:54.073 "enable_recv_pipe": true, 00:21:54.073 "enable_quickack": false, 00:21:54.073 "enable_placement_id": 0, 00:21:54.073 "enable_zerocopy_send_server": true, 00:21:54.073 "enable_zerocopy_send_client": false, 00:21:54.073 "zerocopy_threshold": 0, 00:21:54.073 "tls_version": 0, 00:21:54.073 "enable_ktls": false 00:21:54.073 } 00:21:54.073 }, 00:21:54.073 { 00:21:54.073 "method": "sock_impl_set_options", 00:21:54.073 "params": { 00:21:54.073 "impl_name": "uring", 00:21:54.073 "recv_buf_size": 2097152, 00:21:54.073 "send_buf_size": 2097152, 00:21:54.073 "enable_recv_pipe": true, 00:21:54.073 "enable_quickack": false, 00:21:54.073 "enable_placement_id": 0, 00:21:54.073 "enable_zerocopy_send_server": false, 00:21:54.073 "enable_zerocopy_send_client": false, 00:21:54.073 "zerocopy_threshold": 0, 00:21:54.073 "tls_version": 0, 00:21:54.073 "enable_ktls": false 00:21:54.073 } 00:21:54.073 } 00:21:54.073 ] 00:21:54.073 }, 00:21:54.073 { 00:21:54.073 "subsystem": "vmd", 00:21:54.073 "config": [] 00:21:54.073 }, 00:21:54.073 { 00:21:54.073 "subsystem": "accel", 00:21:54.073 "config": [ 00:21:54.073 { 00:21:54.073 "method": "accel_set_options", 00:21:54.073 "params": { 00:21:54.073 "small_cache_size": 128, 00:21:54.073 "large_cache_size": 16, 00:21:54.073 "task_count": 2048, 00:21:54.073 "sequence_count": 2048, 00:21:54.073 "buf_count": 2048 00:21:54.073 } 00:21:54.074 } 00:21:54.074 ] 00:21:54.074 }, 00:21:54.074 { 00:21:54.074 "subsystem": "bdev", 00:21:54.074 "config": [ 00:21:54.074 { 
00:21:54.074 "method": "bdev_set_options", 00:21:54.074 "params": { 00:21:54.074 "bdev_io_pool_size": 65535, 00:21:54.074 "bdev_io_cache_size": 256, 00:21:54.074 "bdev_auto_examine": true, 00:21:54.074 "iobuf_small_cache_size": 128, 00:21:54.074 "iobuf_large_cache_size": 16 00:21:54.074 } 00:21:54.074 }, 00:21:54.074 { 00:21:54.074 "method": "bdev_raid_set_options", 00:21:54.074 "params": { 00:21:54.074 "process_window_size_kb": 1024 00:21:54.074 } 00:21:54.074 }, 00:21:54.074 { 00:21:54.074 "method": "bdev_iscsi_set_options", 00:21:54.074 "params": { 00:21:54.074 "timeout_sec": 30 00:21:54.074 } 00:21:54.074 }, 00:21:54.074 { 00:21:54.074 "method": "bdev_nvme_set_options", 00:21:54.074 "params": { 00:21:54.074 "action_on_timeout": "none", 00:21:54.074 "timeout_us": 0, 00:21:54.074 "timeout_admin_us": 0, 00:21:54.074 "keep_alive_timeout_ms": 10000, 00:21:54.074 "arbitration_burst": 0, 00:21:54.074 "low_priority_weight": 0, 00:21:54.074 "medium_priority_weight": 0, 00:21:54.074 "high_priority_weight": 0, 00:21:54.074 "nvme_adminq_poll_period_us": 10000, 00:21:54.074 "nvme_ioq_poll_period_us": 0, 00:21:54.074 "io_queue_requests": 512, 00:21:54.074 "delay_cmd_submit": true, 00:21:54.074 "transport_retry_count": 4, 00:21:54.074 "bdev_retry_count": 3, 00:21:54.074 "transport_ack_timeout": 0, 00:21:54.074 "ctrlr_loss_timeout_sec": 0, 00:21:54.074 "reconnect_delay_sec": 0, 00:21:54.074 "fast_io_fail_timeout_sec": 0, 00:21:54.074 "disable_auto_failback": false, 00:21:54.074 "generate_uuids": false, 00:21:54.074 "transport_tos": 0, 00:21:54.074 "nvme_error_stat": false, 00:21:54.074 "rdma_srq_size": 0, 00:21:54.074 "io_path_stat": false, 00:21:54.074 "allow_accel_sequence": false, 00:21:54.074 "rdma_max_cq_size": 0, 00:21:54.074 "rdma_cm_event_timeout_ms": 0, 00:21:54.074 "dhchap_digests": [ 00:21:54.074 "sha256", 00:21:54.074 "sha384", 00:21:54.074 "sha512" 00:21:54.074 ], 00:21:54.074 "dhchap_dhgroups": [ 00:21:54.074 "null", 00:21:54.074 "ffdhe2048", 00:21:54.074 "ffdhe3072", 00:21:54.074 "ffdhe4096", 00:21:54.074 "ffdhe6144", 00:21:54.074 "ffdhe8192" 00:21:54.074 ] 00:21:54.074 } 00:21:54.074 }, 00:21:54.074 { 00:21:54.074 "method": "bdev_nvme_attach_controller", 00:21:54.074 "params": { 00:21:54.074 "name": "nvme0", 00:21:54.074 "trtype": "TCP", 00:21:54.074 "adrfam": "IPv4", 00:21:54.074 "traddr": "127.0.0.1", 00:21:54.074 "trsvcid": "4420", 00:21:54.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:54.074 "prchk_reftag": false, 00:21:54.074 "prchk_guard": false, 00:21:54.074 "ctrlr_loss_timeout_sec": 0, 00:21:54.074 "reconnect_delay_sec": 0, 00:21:54.074 "fast_io_fail_timeout_sec": 0, 00:21:54.074 "psk": "key0", 00:21:54.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:54.074 "hdgst": false, 00:21:54.074 "ddgst": false 00:21:54.074 } 00:21:54.074 }, 00:21:54.074 { 00:21:54.074 "method": "bdev_nvme_set_hotplug", 00:21:54.074 "params": { 00:21:54.074 "period_us": 100000, 00:21:54.074 "enable": false 00:21:54.074 } 00:21:54.074 }, 00:21:54.074 { 00:21:54.074 "method": "bdev_wait_for_examine" 00:21:54.074 } 00:21:54.074 ] 00:21:54.074 }, 00:21:54.074 { 00:21:54.074 "subsystem": "nbd", 00:21:54.074 "config": [] 00:21:54.074 } 00:21:54.074 ] 00:21:54.074 }' 00:21:54.074 16:36:39 keyring_file -- keyring/file.sh@114 -- # killprocess 85317 00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85317 ']' 00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85317 00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85317 00:21:54.074 killing process with pid 85317 00:21:54.074 Received shutdown signal, test time was about 1.000000 seconds 00:21:54.074 00:21:54.074 Latency(us) 00:21:54.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.074 =================================================================================================================== 00:21:54.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85317' 00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@967 -- # kill 85317 00:21:54.074 16:36:39 keyring_file -- common/autotest_common.sh@972 -- # wait 85317 00:21:54.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:54.640 16:36:39 keyring_file -- keyring/file.sh@117 -- # bperfpid=85574 00:21:54.640 16:36:39 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85574 /var/tmp/bperf.sock 00:21:54.640 16:36:39 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85574 ']' 00:21:54.640 16:36:39 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:54.640 16:36:39 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.640 16:36:39 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
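The replacement bdevperf instance below is launched with -c /dev/fd/63, i.e. the configuration saved from the previous process is echoed back in through a process substitution. The dump that follows is long; for this test the relevant piece is the keyring subsystem, which reduces to the two file-based keys registered earlier in the run. A condensed sketch of that fragment (names and temp-file paths are the ones from this run):

    import json

    # Condensed view of the keyring portion of the config fed to bdevperf below.
    keyring_subsystem = {
        "subsystem": "keyring",
        "config": [
            {"method": "keyring_file_add_key",
             "params": {"name": "key0", "path": "/tmp/tmp.fqVpWYM7iO"}},
            {"method": "keyring_file_add_key",
             "params": {"name": "key1", "path": "/tmp/tmp.t6ZaJXbetb"}},
        ],
    }
    print(json.dumps({"subsystems": [keyring_subsystem]}, indent=2))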
00:21:54.640 16:36:39 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.640 16:36:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:54.640 16:36:39 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:54.640 16:36:39 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:54.640 "subsystems": [ 00:21:54.640 { 00:21:54.640 "subsystem": "keyring", 00:21:54.640 "config": [ 00:21:54.640 { 00:21:54.640 "method": "keyring_file_add_key", 00:21:54.640 "params": { 00:21:54.640 "name": "key0", 00:21:54.640 "path": "/tmp/tmp.fqVpWYM7iO" 00:21:54.640 } 00:21:54.640 }, 00:21:54.640 { 00:21:54.640 "method": "keyring_file_add_key", 00:21:54.640 "params": { 00:21:54.640 "name": "key1", 00:21:54.640 "path": "/tmp/tmp.t6ZaJXbetb" 00:21:54.640 } 00:21:54.640 } 00:21:54.640 ] 00:21:54.640 }, 00:21:54.641 { 00:21:54.641 "subsystem": "iobuf", 00:21:54.641 "config": [ 00:21:54.641 { 00:21:54.641 "method": "iobuf_set_options", 00:21:54.641 "params": { 00:21:54.641 "small_pool_count": 8192, 00:21:54.641 "large_pool_count": 1024, 00:21:54.641 "small_bufsize": 8192, 00:21:54.641 "large_bufsize": 135168 00:21:54.641 } 00:21:54.641 } 00:21:54.641 ] 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "subsystem": "sock", 00:21:54.641 "config": [ 00:21:54.641 { 00:21:54.641 "method": "sock_set_default_impl", 00:21:54.641 "params": { 00:21:54.641 "impl_name": "uring" 00:21:54.641 } 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "method": "sock_impl_set_options", 00:21:54.641 "params": { 00:21:54.641 "impl_name": "ssl", 00:21:54.641 "recv_buf_size": 4096, 00:21:54.641 "send_buf_size": 4096, 00:21:54.641 "enable_recv_pipe": true, 00:21:54.641 "enable_quickack": false, 00:21:54.641 "enable_placement_id": 0, 00:21:54.641 "enable_zerocopy_send_server": true, 00:21:54.641 "enable_zerocopy_send_client": false, 00:21:54.641 "zerocopy_threshold": 0, 00:21:54.641 "tls_version": 0, 00:21:54.641 "enable_ktls": false 00:21:54.641 } 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "method": "sock_impl_set_options", 00:21:54.641 "params": { 00:21:54.641 "impl_name": "posix", 00:21:54.641 "recv_buf_size": 2097152, 00:21:54.641 "send_buf_size": 2097152, 00:21:54.641 "enable_recv_pipe": true, 00:21:54.641 "enable_quickack": false, 00:21:54.641 "enable_placement_id": 0, 00:21:54.641 "enable_zerocopy_send_server": true, 00:21:54.641 "enable_zerocopy_send_client": false, 00:21:54.641 "zerocopy_threshold": 0, 00:21:54.641 "tls_version": 0, 00:21:54.641 "enable_ktls": false 00:21:54.641 } 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "method": "sock_impl_set_options", 00:21:54.641 "params": { 00:21:54.641 "impl_name": "uring", 00:21:54.641 "recv_buf_size": 2097152, 00:21:54.641 "send_buf_size": 2097152, 00:21:54.641 "enable_recv_pipe": true, 00:21:54.641 "enable_quickack": false, 00:21:54.641 "enable_placement_id": 0, 00:21:54.641 "enable_zerocopy_send_server": false, 00:21:54.641 "enable_zerocopy_send_client": false, 00:21:54.641 "zerocopy_threshold": 0, 00:21:54.641 "tls_version": 0, 00:21:54.641 "enable_ktls": false 00:21:54.641 } 00:21:54.641 } 00:21:54.641 ] 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "subsystem": "vmd", 00:21:54.641 "config": [] 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "subsystem": "accel", 00:21:54.641 "config": [ 00:21:54.641 { 00:21:54.641 "method": "accel_set_options", 00:21:54.641 "params": { 00:21:54.641 "small_cache_size": 128, 00:21:54.641 "large_cache_size": 16, 
00:21:54.641 "task_count": 2048, 00:21:54.641 "sequence_count": 2048, 00:21:54.641 "buf_count": 2048 00:21:54.641 } 00:21:54.641 } 00:21:54.641 ] 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "subsystem": "bdev", 00:21:54.641 "config": [ 00:21:54.641 { 00:21:54.641 "method": "bdev_set_options", 00:21:54.641 "params": { 00:21:54.641 "bdev_io_pool_size": 65535, 00:21:54.641 "bdev_io_cache_size": 256, 00:21:54.641 "bdev_auto_examine": true, 00:21:54.641 "iobuf_small_cache_size": 128, 00:21:54.641 "iobuf_large_cache_size": 16 00:21:54.641 } 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "method": "bdev_raid_set_options", 00:21:54.641 "params": { 00:21:54.641 "process_window_size_kb": 1024 00:21:54.641 } 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "method": "bdev_iscsi_set_options", 00:21:54.641 "params": { 00:21:54.641 "timeout_sec": 30 00:21:54.641 } 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "method": "bdev_nvme_set_options", 00:21:54.641 "params": { 00:21:54.641 "action_on_timeout": "none", 00:21:54.641 "timeout_us": 0, 00:21:54.641 "timeout_admin_us": 0, 00:21:54.641 "keep_alive_timeout_ms": 10000, 00:21:54.641 "arbitration_burst": 0, 00:21:54.641 "low_priority_weight": 0, 00:21:54.641 "medium_priority_weight": 0, 00:21:54.641 "high_priority_weight": 0, 00:21:54.641 "nvme_adminq_poll_period_us": 10000, 00:21:54.641 "nvme_ioq_poll_period_us": 0, 00:21:54.641 "io_queue_requests": 512, 00:21:54.641 "delay_cmd_submit": true, 00:21:54.641 "transport_retry_count": 4, 00:21:54.641 "bdev_retry_count": 3, 00:21:54.641 "transport_ack_timeout": 0, 00:21:54.641 "ctrlr_loss_timeout_sec": 0, 00:21:54.641 "reconnect_delay_sec": 0, 00:21:54.641 "fast_io_fail_timeout_sec": 0, 00:21:54.641 "disable_auto_failback": false, 00:21:54.641 "generate_uuids": false, 00:21:54.641 "transport_tos": 0, 00:21:54.641 "nvme_error_stat": false, 00:21:54.641 "rdma_srq_size": 0, 00:21:54.641 "io_path_stat": false, 00:21:54.641 "allow_accel_sequence": false, 00:21:54.641 "rdma_max_cq_size": 0, 00:21:54.641 "rdma_cm_event_timeout_ms": 0, 00:21:54.641 "dhchap_digests": [ 00:21:54.641 "sha256", 00:21:54.641 "sha384", 00:21:54.641 "sha512" 00:21:54.641 ], 00:21:54.641 "dhchap_dhgroups": [ 00:21:54.641 "null", 00:21:54.641 "ffdhe2048", 00:21:54.641 "ffdhe3072", 00:21:54.641 "ffdhe4096", 00:21:54.641 "ffdhe6144", 00:21:54.641 "ffdhe8192" 00:21:54.641 ] 00:21:54.641 } 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "method": "bdev_nvme_attach_controller", 00:21:54.641 "params": { 00:21:54.641 "name": "nvme0", 00:21:54.641 "trtype": "TCP", 00:21:54.641 "adrfam": "IPv4", 00:21:54.641 "traddr": "127.0.0.1", 00:21:54.641 "trsvcid": "4420", 00:21:54.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:54.641 "prchk_reftag": false, 00:21:54.641 "prchk_guard": false, 00:21:54.641 "ctrlr_loss_timeout_sec": 0, 00:21:54.641 "reconnect_delay_sec": 0, 00:21:54.641 "fast_io_fail_timeout_sec": 0, 00:21:54.641 "psk": "key0", 00:21:54.641 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:54.641 "hdgst": false, 00:21:54.641 "ddgst": false 00:21:54.641 } 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "method": "bdev_nvme_set_hotplug", 00:21:54.641 "params": { 00:21:54.641 "period_us": 100000, 00:21:54.641 "enable": false 00:21:54.641 } 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "method": "bdev_wait_for_examine" 00:21:54.641 } 00:21:54.641 ] 00:21:54.641 }, 00:21:54.641 { 00:21:54.641 "subsystem": "nbd", 00:21:54.641 "config": [] 00:21:54.641 } 00:21:54.641 ] 00:21:54.641 }' 00:21:54.641 [2024-07-15 16:36:39.955689] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 
24.03.0 initialization... 00:21:54.641 [2024-07-15 16:36:39.955835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85574 ] 00:21:54.641 [2024-07-15 16:36:40.098784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.899 [2024-07-15 16:36:40.246949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.899 [2024-07-15 16:36:40.401450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:55.158 [2024-07-15 16:36:40.470014] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.417 16:36:40 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.417 16:36:40 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:55.417 16:36:40 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:55.417 16:36:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.417 16:36:40 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:55.675 16:36:41 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:55.675 16:36:41 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:55.675 16:36:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.675 16:36:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:55.675 16:36:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.675 16:36:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.675 16:36:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:55.933 16:36:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:55.933 16:36:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:55.933 16:36:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.933 16:36:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:55.934 16:36:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.934 16:36:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.934 16:36:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:56.193 16:36:41 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:56.193 16:36:41 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:56.193 16:36:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:56.193 16:36:41 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:56.451 16:36:41 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:56.451 16:36:41 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:56.451 16:36:41 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.fqVpWYM7iO /tmp/tmp.t6ZaJXbetb 00:21:56.451 16:36:41 keyring_file -- keyring/file.sh@20 -- # killprocess 85574 00:21:56.451 16:36:41 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85574 ']' 00:21:56.451 16:36:41 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85574 00:21:56.451 16:36:41 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:21:56.710 16:36:42 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.710 16:36:42 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85574 00:21:56.710 killing process with pid 85574 00:21:56.710 Received shutdown signal, test time was about 1.000000 seconds 00:21:56.710 00:21:56.710 Latency(us) 00:21:56.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.710 =================================================================================================================== 00:21:56.710 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:56.710 16:36:42 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:56.710 16:36:42 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:56.710 16:36:42 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85574' 00:21:56.710 16:36:42 keyring_file -- common/autotest_common.sh@967 -- # kill 85574 00:21:56.710 16:36:42 keyring_file -- common/autotest_common.sh@972 -- # wait 85574 00:21:56.968 16:36:42 keyring_file -- keyring/file.sh@21 -- # killprocess 85300 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85300 ']' 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85300 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@953 -- # uname 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85300 00:21:56.968 killing process with pid 85300 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85300' 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@967 -- # kill 85300 00:21:56.968 [2024-07-15 16:36:42.358717] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:56.968 16:36:42 keyring_file -- common/autotest_common.sh@972 -- # wait 85300 00:21:57.227 00:21:57.227 real 0m16.487s 00:21:57.227 user 0m40.973s 00:21:57.227 sys 0m3.170s 00:21:57.227 16:36:42 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:57.227 ************************************ 00:21:57.227 END TEST keyring_file 00:21:57.227 ************************************ 00:21:57.227 16:36:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:57.486 16:36:42 -- common/autotest_common.sh@1142 -- # return 0 00:21:57.486 16:36:42 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:21:57.486 16:36:42 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:57.486 16:36:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:57.486 16:36:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.486 16:36:42 -- common/autotest_common.sh@10 -- # set +x 00:21:57.486 ************************************ 00:21:57.486 START TEST keyring_linux 00:21:57.486 ************************************ 00:21:57.486 16:36:42 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:57.486 * 
Looking for test storage... 00:21:57.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:57.486 16:36:42 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:57.486 16:36:42 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6219369d-37e8-4ec9-9c79-8e30851e9efc 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=6219369d-37e8-4ec9-9c79-8e30851e9efc 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:57.487 16:36:42 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.487 16:36:42 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.487 16:36:42 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.487 16:36:42 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.487 16:36:42 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.487 16:36:42 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.487 16:36:42 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:57.487 16:36:42 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:57.487 16:36:42 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:57.487 16:36:42 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:57.487 16:36:42 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:57.487 16:36:42 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:57.487 16:36:42 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:57.487 16:36:42 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:57.487 16:36:42 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:57.487 /tmp/:spdk-test:key0 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:57.487 16:36:42 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:57.487 16:36:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:57.487 16:36:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:57.487 16:36:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:57.487 /tmp/:spdk-test:key1 00:21:57.487 16:36:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:57.487 16:36:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85691 00:21:57.487 16:36:43 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:57.487 16:36:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85691 00:21:57.487 16:36:43 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85691 ']' 00:21:57.487 16:36:43 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.487 16:36:43 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.487 16:36:43 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.487 16:36:43 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.487 16:36:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:57.747 [2024-07-15 16:36:43.093335] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
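The prep_key/format_interchange_psk steps above wrap the raw hex keys in the NVMe TLS PSK interchange envelope before writing them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 (and, further down, loading them into the kernel session keyring with keyctl). A rough sketch of how such a string can be assembled, assuming the payload is base64(key bytes + CRC-32 of the key) inside a NVMeTLSkey-1:<hash>: wrapper, which is what the generated values suggest (the CRC byte order here is an assumption):

    import base64, struct, zlib

    def interchange_psk(key: str, hash_id: int = 0) -> str:
        # Assumed layout: configured PSK bytes followed by their CRC-32,
        # base64-encoded and framed as NVMeTLSkey-1:<hash>:<blob>:
        data = key.encode()
        blob = data + struct.pack("<I", zlib.crc32(data))
        return "NVMeTLSkey-1:%02d:%s:" % (hash_id, base64.b64encode(blob).decode())

    print(interchange_psk("00112233445566778899aabbccddeeff", 0))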
00:21:57.747 [2024-07-15 16:36:43.093460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85691 ] 00:21:57.747 [2024-07-15 16:36:43.233343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.005 [2024-07-15 16:36:43.348360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.005 [2024-07-15 16:36:43.404375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:58.603 16:36:44 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.603 16:36:44 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:58.603 16:36:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:58.603 16:36:44 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.603 16:36:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:58.603 [2024-07-15 16:36:44.103999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.603 null0 00:21:58.882 [2024-07-15 16:36:44.135965] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.882 [2024-07-15 16:36:44.136223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:58.882 16:36:44 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.882 16:36:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:58.882 141025740 00:21:58.882 16:36:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:58.882 599217255 00:21:58.882 16:36:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85705 00:21:58.882 16:36:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85705 /var/tmp/bperf.sock 00:21:58.882 16:36:44 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85705 ']' 00:21:58.882 16:36:44 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:58.882 16:36:44 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:58.882 16:36:44 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:58.882 16:36:44 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:58.882 16:36:44 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.882 16:36:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:58.882 [2024-07-15 16:36:44.205472] Starting SPDK v24.09-pre git sha1 bdeef1ed3 / DPDK 24.03.0 initialization... 
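The two keyctl calls above load key0 and key1 into the kernel session keyring (@s) and print their serial numbers (141025740 and 599217255 in this run). To inspect the same state by hand, the commands the test itself uses for verification and cleanup are enough; a short sketch (serial numbers will differ between runs):

    # Look at what the test just put into the session keyring.
    keyctl show @s                                   # list the keyring; the :spdk-test:key* entries show up here
    sn=$(keyctl search @s user :spdk-test:key0)      # resolve the key description to its serial number
    keyctl print "$sn"                               # dump the payload, i.e. the NVMeTLSkey-1:... blob
    # The cleanup phase later removes it with: keyctl unlink "$sn"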
00:21:58.882 [2024-07-15 16:36:44.205557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85705 ] 00:21:58.882 [2024-07-15 16:36:44.336041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.141 [2024-07-15 16:36:44.468263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.708 16:36:45 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.708 16:36:45 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:59.708 16:36:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:59.708 16:36:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:59.967 16:36:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:59.967 16:36:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:00.224 [2024-07-15 16:36:45.679334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:00.224 16:36:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:00.224 16:36:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:00.482 [2024-07-15 16:36:45.955141] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.482 nvme0n1 00:22:00.739 16:36:46 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:00.739 16:36:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:00.739 16:36:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:00.739 16:36:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:00.739 16:36:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:00.739 16:36:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:00.739 16:36:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:00.739 16:36:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:00.997 16:36:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:00.997 16:36:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:00.997 16:36:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:00.997 16:36:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:00.997 16:36:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:00.997 16:36:46 keyring_linux -- keyring/linux.sh@25 -- # sn=141025740 00:22:00.997 16:36:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:00.997 16:36:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:00.997 
16:36:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 141025740 == \1\4\1\0\2\5\7\4\0 ]] 00:22:00.997 16:36:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 141025740 00:22:00.997 16:36:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:00.997 16:36:46 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:01.255 Running I/O for 1 seconds... 00:22:02.197 00:22:02.197 Latency(us) 00:22:02.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.197 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:02.197 nvme0n1 : 1.01 10925.73 42.68 0.00 0.00 11639.75 3083.17 13166.78 00:22:02.197 =================================================================================================================== 00:22:02.197 Total : 10925.73 42.68 0.00 0.00 11639.75 3083.17 13166.78 00:22:02.197 0 00:22:02.197 16:36:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:02.197 16:36:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:02.761 16:36:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:02.761 16:36:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:02.761 16:36:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:02.761 16:36:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:02.761 16:36:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:02.761 16:36:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:03.019 16:36:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:03.019 16:36:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:03.019 16:36:48 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:03.019 16:36:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:03.019 16:36:48 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:22:03.019 16:36:48 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:03.019 16:36:48 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:03.019 16:36:48 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.019 16:36:48 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:03.019 16:36:48 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.019 16:36:48 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:03.019 16:36:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:03.278 [2024-07-15 16:36:48.582932] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:03.278 [2024-07-15 16:36:48.583534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91a460 (107): Transport endpoint is not connected 00:22:03.278 [2024-07-15 16:36:48.584521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91a460 (9): Bad file descriptor 00:22:03.278 [2024-07-15 16:36:48.585517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:03.278 [2024-07-15 16:36:48.585542] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:03.278 [2024-07-15 16:36:48.585553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:03.278 request: 00:22:03.278 { 00:22:03.278 "name": "nvme0", 00:22:03.278 "trtype": "tcp", 00:22:03.278 "traddr": "127.0.0.1", 00:22:03.278 "adrfam": "ipv4", 00:22:03.278 "trsvcid": "4420", 00:22:03.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:03.278 "prchk_reftag": false, 00:22:03.278 "prchk_guard": false, 00:22:03.279 "hdgst": false, 00:22:03.279 "ddgst": false, 00:22:03.279 "psk": ":spdk-test:key1", 00:22:03.279 "method": "bdev_nvme_attach_controller", 00:22:03.279 "req_id": 1 00:22:03.279 } 00:22:03.279 Got JSON-RPC error response 00:22:03.279 response: 00:22:03.279 { 00:22:03.279 "code": -5, 00:22:03.279 "message": "Input/output error" 00:22:03.279 } 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@33 -- # sn=141025740 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 141025740 00:22:03.279 1 links removed 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@33 -- # sn=599217255 00:22:03.279 16:36:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 599217255 00:22:03.279 1 links removed 00:22:03.279 16:36:48 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 85705 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85705 ']' 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85705 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85705 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85705' 00:22:03.279 killing process with pid 85705 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@967 -- # kill 85705 00:22:03.279 Received shutdown signal, test time was about 1.000000 seconds 00:22:03.279 00:22:03.279 Latency(us) 00:22:03.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.279 =================================================================================================================== 00:22:03.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.279 16:36:48 keyring_linux -- common/autotest_common.sh@972 -- # wait 85705 00:22:03.537 16:36:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85691 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85691 ']' 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85691 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85691 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:03.537 killing process with pid 85691 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85691' 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@967 -- # kill 85691 00:22:03.537 16:36:48 keyring_linux -- common/autotest_common.sh@972 -- # wait 85691 00:22:03.795 00:22:03.795 real 0m6.509s 00:22:03.795 user 0m12.642s 00:22:03.795 sys 0m1.596s 00:22:03.795 16:36:49 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:03.795 16:36:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:03.795 ************************************ 00:22:03.795 END TEST keyring_linux 00:22:03.795 ************************************ 00:22:04.054 16:36:49 -- common/autotest_common.sh@1142 -- # return 0 00:22:04.054 16:36:49 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
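Stripped of the xtrace noise, the bdevperf half of the test above is a short RPC conversation over the bperf control socket, followed by one deliberately failing attach with the key the target does not accept (the JSON-RPC error -5 seen in the trace). A condensed sketch of the passing sequence, using the same scripts and arguments that appear in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    "$rpc" -s "$sock" keyring_linux_set_options --enable      # allow keys to be read from the kernel keyring
    "$rpc" -s "$sock" framework_start_init                    # bdevperf was started with -z --wait-for-rpc
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests   # run the 1 s randread job
    "$rpc" -s "$sock" bdev_nvme_detach_controller nvme0

Re-running the attach with --psk :spdk-test:key1 is the negative case: the target was not set up to accept key1, so the attach fails and the harness asserts on the non-zero exit status.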
00:22:04.054 16:36:49 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:04.054 16:36:49 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:22:04.054 16:36:49 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:04.054 16:36:49 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:04.054 16:36:49 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:04.054 16:36:49 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:04.054 16:36:49 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:04.054 16:36:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:04.054 16:36:49 -- common/autotest_common.sh@10 -- # set +x 00:22:04.054 16:36:49 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:04.054 16:36:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:04.054 16:36:49 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:04.054 16:36:49 -- common/autotest_common.sh@10 -- # set +x 00:22:05.430 INFO: APP EXITING 00:22:05.430 INFO: killing all VMs 00:22:05.430 INFO: killing vhost app 00:22:05.430 INFO: EXIT DONE 00:22:05.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:06.258 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:06.258 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:06.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:06.826 Cleaning 00:22:06.826 Removing: /var/run/dpdk/spdk0/config 00:22:06.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:06.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:06.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:06.826 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:06.826 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:06.826 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:06.826 Removing: /var/run/dpdk/spdk1/config 00:22:06.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:06.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:06.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:06.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:06.826 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:06.826 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:06.826 Removing: /var/run/dpdk/spdk2/config 00:22:06.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:06.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:06.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:06.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:06.826 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:06.826 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:06.826 Removing: /var/run/dpdk/spdk3/config 00:22:06.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:06.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:06.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:06.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:06.826 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:06.826 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:06.826 Removing: /var/run/dpdk/spdk4/config 00:22:06.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:06.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:06.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:06.826 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:06.826 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:06.826 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:07.085 Removing: /dev/shm/nvmf_trace.0 00:22:07.085 Removing: /dev/shm/spdk_tgt_trace.pid58681 00:22:07.085 Removing: /var/run/dpdk/spdk0 00:22:07.085 Removing: /var/run/dpdk/spdk1 00:22:07.085 Removing: /var/run/dpdk/spdk2 00:22:07.085 Removing: /var/run/dpdk/spdk3 00:22:07.085 Removing: /var/run/dpdk/spdk4 00:22:07.085 Removing: /var/run/dpdk/spdk_pid58536 00:22:07.085 Removing: /var/run/dpdk/spdk_pid58681 00:22:07.085 Removing: /var/run/dpdk/spdk_pid58879 00:22:07.085 Removing: /var/run/dpdk/spdk_pid58965 00:22:07.085 Removing: /var/run/dpdk/spdk_pid58993 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59108 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59113 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59231 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59427 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59568 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59632 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59708 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59799 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59871 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59909 00:22:07.085 Removing: /var/run/dpdk/spdk_pid59939 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60001 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60100 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60533 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60585 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60636 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60652 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60719 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60731 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60801 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60813 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60858 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60876 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60922 00:22:07.085 Removing: /var/run/dpdk/spdk_pid60940 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61062 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61097 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61169 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61225 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61249 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61308 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61342 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61377 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61411 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61446 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61480 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61515 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61555 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61584 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61624 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61653 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61693 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61722 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61762 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61791 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61831 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61862 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61905 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61938 00:22:07.085 Removing: /var/run/dpdk/spdk_pid61977 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62013 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62077 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62172 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62476 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62492 00:22:07.085 
Removing: /var/run/dpdk/spdk_pid62530 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62538 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62559 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62578 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62597 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62618 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62637 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62645 00:22:07.085 Removing: /var/run/dpdk/spdk_pid62666 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62685 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62704 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62724 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62744 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62752 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62773 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62793 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62812 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62828 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62858 00:22:07.086 Removing: /var/run/dpdk/spdk_pid62877 00:22:07.344 Removing: /var/run/dpdk/spdk_pid62907 00:22:07.344 Removing: /var/run/dpdk/spdk_pid62971 00:22:07.344 Removing: /var/run/dpdk/spdk_pid62999 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63009 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63037 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63052 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63060 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63102 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63116 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63150 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63158 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63169 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63178 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63188 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63203 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63207 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63222 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63245 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63277 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63292 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63315 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63330 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63333 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63378 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63395 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63416 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63429 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63442 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63444 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63457 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63465 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63472 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63485 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63554 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63601 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63711 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63745 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63790 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63810 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63826 00:22:07.344 Removing: /var/run/dpdk/spdk_pid63841 00:22:07.345 Removing: /var/run/dpdk/spdk_pid63878 00:22:07.345 Removing: /var/run/dpdk/spdk_pid63899 00:22:07.345 Removing: /var/run/dpdk/spdk_pid63963 00:22:07.345 Removing: /var/run/dpdk/spdk_pid63987 00:22:07.345 Removing: /var/run/dpdk/spdk_pid64031 00:22:07.345 Removing: /var/run/dpdk/spdk_pid64104 00:22:07.345 Removing: /var/run/dpdk/spdk_pid64166 00:22:07.345 Removing: /var/run/dpdk/spdk_pid64196 00:22:07.345 Removing: 
/var/run/dpdk/spdk_pid64281 00:22:07.345 Removing: /var/run/dpdk/spdk_pid64329 00:22:07.345 Removing: /var/run/dpdk/spdk_pid64361 00:22:07.345 Removing: /var/run/dpdk/spdk_pid64584 00:22:07.345 Removing: /var/run/dpdk/spdk_pid64677 00:22:07.345 Removing: /var/run/dpdk/spdk_pid64706 00:22:07.345 Removing: /var/run/dpdk/spdk_pid65022 00:22:07.345 Removing: /var/run/dpdk/spdk_pid65060 00:22:07.345 Removing: /var/run/dpdk/spdk_pid65350 00:22:07.345 Removing: /var/run/dpdk/spdk_pid65755 00:22:07.345 Removing: /var/run/dpdk/spdk_pid66025 00:22:07.345 Removing: /var/run/dpdk/spdk_pid66809 00:22:07.345 Removing: /var/run/dpdk/spdk_pid67625 00:22:07.345 Removing: /var/run/dpdk/spdk_pid67747 00:22:07.345 Removing: /var/run/dpdk/spdk_pid67815 00:22:07.345 Removing: /var/run/dpdk/spdk_pid69076 00:22:07.345 Removing: /var/run/dpdk/spdk_pid69277 00:22:07.345 Removing: /var/run/dpdk/spdk_pid72684 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73002 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73110 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73259 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73279 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73307 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73334 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73432 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73569 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73725 00:22:07.345 Removing: /var/run/dpdk/spdk_pid73807 00:22:07.345 Removing: /var/run/dpdk/spdk_pid74000 00:22:07.345 Removing: /var/run/dpdk/spdk_pid74089 00:22:07.345 Removing: /var/run/dpdk/spdk_pid74181 00:22:07.345 Removing: /var/run/dpdk/spdk_pid74485 00:22:07.345 Removing: /var/run/dpdk/spdk_pid74865 00:22:07.345 Removing: /var/run/dpdk/spdk_pid74867 00:22:07.345 Removing: /var/run/dpdk/spdk_pid75148 00:22:07.345 Removing: /var/run/dpdk/spdk_pid75162 00:22:07.345 Removing: /var/run/dpdk/spdk_pid75186 00:22:07.345 Removing: /var/run/dpdk/spdk_pid75212 00:22:07.345 Removing: /var/run/dpdk/spdk_pid75223 00:22:07.345 Removing: /var/run/dpdk/spdk_pid75520 00:22:07.345 Removing: /var/run/dpdk/spdk_pid75569 00:22:07.604 Removing: /var/run/dpdk/spdk_pid75840 00:22:07.604 Removing: /var/run/dpdk/spdk_pid76043 00:22:07.604 Removing: /var/run/dpdk/spdk_pid76420 00:22:07.604 Removing: /var/run/dpdk/spdk_pid76927 00:22:07.604 Removing: /var/run/dpdk/spdk_pid77750 00:22:07.604 Removing: /var/run/dpdk/spdk_pid78333 00:22:07.604 Removing: /var/run/dpdk/spdk_pid78335 00:22:07.604 Removing: /var/run/dpdk/spdk_pid80243 00:22:07.604 Removing: /var/run/dpdk/spdk_pid80305 00:22:07.604 Removing: /var/run/dpdk/spdk_pid80365 00:22:07.604 Removing: /var/run/dpdk/spdk_pid80424 00:22:07.604 Removing: /var/run/dpdk/spdk_pid80541 00:22:07.604 Removing: /var/run/dpdk/spdk_pid80601 00:22:07.604 Removing: /var/run/dpdk/spdk_pid80656 00:22:07.604 Removing: /var/run/dpdk/spdk_pid80722 00:22:07.604 Removing: /var/run/dpdk/spdk_pid81034 00:22:07.604 Removing: /var/run/dpdk/spdk_pid82198 00:22:07.604 Removing: /var/run/dpdk/spdk_pid82338 00:22:07.604 Removing: /var/run/dpdk/spdk_pid82575 00:22:07.604 Removing: /var/run/dpdk/spdk_pid83126 00:22:07.604 Removing: /var/run/dpdk/spdk_pid83281 00:22:07.604 Removing: /var/run/dpdk/spdk_pid83439 00:22:07.604 Removing: /var/run/dpdk/spdk_pid83536 00:22:07.604 Removing: /var/run/dpdk/spdk_pid83724 00:22:07.604 Removing: /var/run/dpdk/spdk_pid83835 00:22:07.604 Removing: /var/run/dpdk/spdk_pid84489 00:22:07.604 Removing: /var/run/dpdk/spdk_pid84520 00:22:07.604 Removing: /var/run/dpdk/spdk_pid84562 00:22:07.604 Removing: /var/run/dpdk/spdk_pid84811 
00:22:07.604 Removing: /var/run/dpdk/spdk_pid84848 00:22:07.604 Removing: /var/run/dpdk/spdk_pid84878 00:22:07.604 Removing: /var/run/dpdk/spdk_pid85300 00:22:07.604 Removing: /var/run/dpdk/spdk_pid85317 00:22:07.604 Removing: /var/run/dpdk/spdk_pid85574 00:22:07.604 Removing: /var/run/dpdk/spdk_pid85691 00:22:07.604 Removing: /var/run/dpdk/spdk_pid85705 00:22:07.604 Clean 00:22:07.604 16:36:53 -- common/autotest_common.sh@1451 -- # return 0 00:22:07.604 16:36:53 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:07.604 16:36:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.604 16:36:53 -- common/autotest_common.sh@10 -- # set +x 00:22:07.604 16:36:53 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:07.604 16:36:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.604 16:36:53 -- common/autotest_common.sh@10 -- # set +x 00:22:07.604 16:36:53 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:07.604 16:36:53 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:07.604 16:36:53 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:07.604 16:36:53 -- spdk/autotest.sh@391 -- # hash lcov 00:22:07.604 16:36:53 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:07.604 16:36:53 -- spdk/autotest.sh@393 -- # hostname 00:22:07.604 16:36:53 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:07.861 geninfo: WARNING: invalid characters removed from testname! 
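The lcov invocation above captures post-test coverage counters for the repository; the steps that follow merge that capture with the pre-test baseline and then strip vendored and system sources (dpdk, /usr, and a few example/app directories) with repeated -r calls. Condensed, and with the long --rc lcov_*/genhtml_* option list omitted for brevity, the flow is roughly:

    out=/home/vagrant/spdk_repo/spdk/../output
    repo=/home/vagrant/spdk_repo/spdk
    lcov -q -c -d "$repo" --no-external -t "$(hostname)" -o "$out/cov_test.info"      # capture counters after the tests
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge with the pre-test baseline
    lcov -q -r "$out/cov_total.info" '*/dpdk/*' '/usr/*' -o "$out/cov_total.info"     # drop vendored and system code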
00:22:34.398 16:37:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:36.929 16:37:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:39.462 16:37:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:42.752 16:37:27 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:45.305 16:37:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:47.867 16:37:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:50.399 16:37:35 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:50.399 16:37:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:50.399 16:37:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:50.399 16:37:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.399 16:37:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.399 16:37:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.399 16:37:35 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.399 16:37:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.399 16:37:35 -- paths/export.sh@5 -- $ export PATH 00:22:50.399 16:37:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.399 16:37:35 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:50.399 16:37:35 -- common/autobuild_common.sh@444 -- $ date +%s 00:22:50.399 16:37:35 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721061455.XXXXXX 00:22:50.399 16:37:35 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721061455.qvaBqh 00:22:50.399 16:37:35 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:22:50.399 16:37:35 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:22:50.399 16:37:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:50.399 16:37:35 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:50.399 16:37:35 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:50.399 16:37:35 -- common/autobuild_common.sh@460 -- $ get_config_params 00:22:50.399 16:37:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:22:50.399 16:37:35 -- common/autotest_common.sh@10 -- $ set +x 00:22:50.399 16:37:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:50.399 16:37:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:22:50.399 16:37:35 -- pm/common@17 -- $ local monitor 00:22:50.399 16:37:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:50.399 16:37:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:50.399 16:37:35 -- pm/common@25 -- $ sleep 1 00:22:50.399 16:37:35 -- pm/common@21 -- $ date +%s 00:22:50.399 16:37:35 -- pm/common@21 -- $ date +%s 00:22:50.399 16:37:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721061455 00:22:50.399 16:37:35 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721061455 00:22:50.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721061455_collect-vmstat.pm.log 00:22:50.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721061455_collect-cpu-load.pm.log 00:22:51.593 16:37:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:22:51.593 16:37:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:51.593 16:37:36 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:51.593 16:37:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:51.593 16:37:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:51.593 16:37:36 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:51.593 16:37:36 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:51.593 16:37:36 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:51.593 16:37:36 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:51.593 16:37:36 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:51.593 16:37:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:51.593 16:37:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:51.593 16:37:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:51.593 16:37:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:51.593 16:37:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:51.593 16:37:37 -- pm/common@44 -- $ pid=87428 00:22:51.593 16:37:37 -- pm/common@50 -- $ kill -TERM 87428 00:22:51.593 16:37:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:51.593 16:37:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:51.593 16:37:37 -- pm/common@44 -- $ pid=87430 00:22:51.593 16:37:37 -- pm/common@50 -- $ kill -TERM 87430 00:22:51.593 + [[ -n 5107 ]] 00:22:51.593 + sudo kill 5107 00:22:51.602 [Pipeline] } 00:22:51.623 [Pipeline] // timeout 00:22:51.629 [Pipeline] } 00:22:51.648 [Pipeline] // stage 00:22:51.654 [Pipeline] } 00:22:51.674 [Pipeline] // catchError 00:22:51.683 [Pipeline] stage 00:22:51.686 [Pipeline] { (Stop VM) 00:22:51.701 [Pipeline] sh 00:22:51.980 + vagrant halt 00:22:56.180 ==> default: Halting domain... 00:23:01.458 [Pipeline] sh 00:23:01.737 + vagrant destroy -f 00:23:05.994 ==> default: Removing domain... 
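Before the VM is torn down, timing_finish renders the per-step timing log collected during the run as a flame graph. The same command reproduces that rendering locally; a sketch (flamegraph.pl comes from https://github.com/brendangregg/FlameGraph, and the redirect to timing.svg is an addition here, it is not visible in the trace):

    # Render the per-step timing data as an SVG flame graph.
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds \
        /home/vagrant/spdk_repo/spdk/../output/timing.txt > timing.svg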
00:23:06.006 [Pipeline] sh 00:23:06.286 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:06.295 [Pipeline] } 00:23:06.343 [Pipeline] // stage 00:23:06.350 [Pipeline] } 00:23:06.369 [Pipeline] // dir 00:23:06.374 [Pipeline] } 00:23:06.389 [Pipeline] // wrap 00:23:06.395 [Pipeline] } 00:23:06.410 [Pipeline] // catchError 00:23:06.422 [Pipeline] stage 00:23:06.424 [Pipeline] { (Epilogue) 00:23:06.439 [Pipeline] sh 00:23:06.766 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:13.347 [Pipeline] catchError 00:23:13.350 [Pipeline] { 00:23:13.365 [Pipeline] sh 00:23:13.647 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:13.647 Artifacts sizes are good 00:23:13.657 [Pipeline] } 00:23:13.675 [Pipeline] // catchError 00:23:13.687 [Pipeline] archiveArtifacts 00:23:13.694 Archiving artifacts 00:23:13.850 [Pipeline] cleanWs 00:23:13.863 [WS-CLEANUP] Deleting project workspace... 00:23:13.863 [WS-CLEANUP] Deferred wipeout is used... 00:23:13.869 [WS-CLEANUP] done 00:23:13.872 [Pipeline] } 00:23:13.892 [Pipeline] // stage 00:23:13.899 [Pipeline] } 00:23:13.917 [Pipeline] // node 00:23:13.923 [Pipeline] End of Pipeline 00:23:13.961 Finished: SUCCESS